I'm working on one last Perl script to update my /etc/hosts file, but I'm stuck and wondered if someone could help, please?
I have a text file with an IP in it, and I need my Perl script to read it, which I've done, but now I'm stuck on updating the /etc/hosts file.
Here is my script so far:
#!/usr/bin/perl
use strict;
my $ip_to_update;
$ip_to_update = `awk '{print \$5}' /web_root/ip_update/ip_update.txt`;
print "ip = $ip_to_update";
I then need to find an entry in /etc/hosts like
remote.host.tld 192.168.0.20
so I know I need to parse it for remote.host.tld and then replace the second bit, but because the IP won't be the same I can't just do a straight replace.
Can anyone help with the last bit please, as I'm stuck :(
Thank you!
Your substitution will look like this:
s#^.*\s(remote\.host\.tld)\s*$#$ip_to_update\t$1#
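For example, applied inside a read/rewrite pass over the hosts file — a minimal sketch, assuming the usual /etc/hosts layout (address first, hostname at the end of the line); the chomp and the re-added newline are my additions, not something from your script:
#!/usr/bin/perl
use strict;
use warnings;

my $ip_to_update = `awk '{print \$5}' /web_root/ip_update/ip_update.txt`;
chomp $ip_to_update;

open my $in,  '<', '/etc/hosts'     or die "Can't read /etc/hosts: $!";
open my $out, '>', '/etc/hosts.new' or die "Can't write /etc/hosts.new: $!";

while ( my $line = <$in> ) {
    # Rewrite only the remote.host.tld line; the \n goes back in because
    # \s* swallows the trailing newline
    $line =~ s#^.*\s(remote\.host\.tld)\s*$#$ip_to_update\t$1\n#;
    print $out $line;
}

close $in;
close $out;
# then move /etc/hosts.new over /etc/hosts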
Replacement can be done in one line:
perl -i -wpe 'BEGIN{$ip = `awk "{print \\\$5}" /web_root/ip_update/ip_update.txt`; chomp $ip} s#^.*\s(remote\.host\.tld)\s*$#$ip\t$1\n#' /etc/hosts
OK, I updated my script to include the file edit etc. all in one. It might not be the best way to do it, but it works :)
#!/usr/bin/perl
use strict;
use warnings;
use File::Copy;
my $ip_to_update; # IP from file
my $fh_r;         # Filehandle for reading
my $fh_w;         # Filehandle for writing
my $file_read = "/etc/hosts"; # File to read in
my $file_write = "/etc/hosts.new"; # File to write out
my $file_backup = "/etc/hosts.bak"; # File to copy original to
# Extract the IP (5th field) from the text file with awk
$ip_to_update = `/bin/awk '{print \$5}' /web_root/ip_update/ip_update.txt`;
# Open File Handlers
open( $fh_r, '<', $file_read ) or die "Can't open $file_read: $!";
open( $fh_w, '>', $file_write ) or die "Can't open $file_write: $!";
while ( my $line = <$fh_r> )
{
    if ( $line =~ /remote\.host\.tld/ )
    {
        # Skip the old entry; a fresh one is appended below
        #print $fh_w "# $line";
    }
    else
    {
        print $fh_w $line;
    }
}
chomp($ip_to_update); # Remove trailing newline

# Print out new line with new IP and hostname
print $fh_w "$ip_to_update remote.host.tld\n";

# Close filehandles
close $fh_r;
close $fh_w;
move("$file_read","$file_backup"); # Moves original file to .bak
move("$file_write","$file_read"); # Moves new file to original file loaction
I'm trying to write a Perl script that saves the whole contents of every file that contains a specific string, 'PYAG_GENERATED', into a single .txt/.tmp file, one after another. These file names follow a specific pattern, 'output_nnnn.txt', where nnnn is 0001, 0002 and so on. But I don't know how many files with the name 'output_nnnn.txt' are present.
I'm new to Perl and I don't know how I can resolve this issue to get the output correctly. Can anyone help me? Thanks in advance.
I've tried to write the Perl script in different ways but nothing ends up in the output file. I'm giving here one of the attempts. 'new_1.txt' is the new file where I want to save the expected output and "PYAG_GENERATED" is the specific string I'm looking for in the files.
open(NEW,">>new_1.txt") or die "could not open:$!";
$find2="PYAG_GENERATED";
$n='0001';
while('output_$n.txt'){
if(/find2/){
print NEW;
}
$n++;
}
close NEW;
I expect that the output file 'new_1.txt' will contain the whole contents of the files (with filename pattern 'output_nnnn.txt') that have the 'PYAG_GENERATED' string at least once inside.
Well, you tried I guess.
Welcome to the wonderful world of Perl, where there are always a dozen ways of doing X :-) Here is one possible way to achieve what you want. I put in a lot of comments I hope are helpful. It's also a bit verbose for the sake of clarity. I'm sure it could be golfed down to 5 lines of code.
use warnings; # Always start your Perl code with these two lines,
use strict; # and Perl will tell you about possible mistakes
use experimental 'signatures';
use File::Slurp;
# this is a subroutine/function, a block of code that can be called from
# somewhere else. it takes two arguments, which the caller must provide
sub find_in_file( $filename, $what_to_look_for )
{
    # the open function opens $filename for reading
    # (that's what the "<" means, ">" stands for writing)
    # if open is successful we will have a "file handle" in the variable $in
    # if not, open will return false ...
    open( my $in, "<", $filename )
        or die $!; # ... and the program will exit here. The variable $! will contain the error message

    # now we read the file using a loop
    # readline will give us the next line in the file
    # or something false when there is nothing left to read
    while ( my $line = readline($in) )
    {
        # now we test whether the current line contains what
        # we are looking for.
        # the index function gives us the position of a string within another string,
        # or -1 if it is not found.
        # for example index("abc", "c") will give us 2
        if ( index( $line, $what_to_look_for ) >= 0 )
        {
            # we found what we were looking for
            # so we don't need to keep looking in this file anymore
            # so we must first close the file
            close( $in );

            # and then we indicate to the caller that the search was successful
            # this will immediately end the subroutine
            return 1;
        }
    }

    # If we arrive here the search was unsuccessful
    # so we tell that to the caller
    return 0;
}
# Here starts the main program

# First we get a list of files we want to look at
my @possible_files = glob( "where/your/files/are/output_*.txt" );

# Here we will store the files that we are interested in, aka that contain PYAG_GENERATED
my @wanted_files;

# and now we can loop over the files and see if they contain what we are looking for
foreach my $filename ( @possible_files )
{
    # here we use the function we defined earlier
    if ( find_in_file( $filename, "PYAG_GENERATED" ) )
    {
        # with push we can add things to the end of an array
        push @wanted_files, $filename;
    }
}

# We are finished searching, now we can start adding the files together
# if we found any
if ( scalar @wanted_files > 0 )
{
    # Now we could code that ourselves: open the files, loop through them and write out
    # line by line. But we make life easy for us and just
    # use two functions from the module File::Slurp, which you may need to
    # install from CPAN if it isn't already available
    foreach my $filename ( @wanted_files )
    {
        append_file( "new_1.txt", read_file( $filename ) );
    }
    print "Output created from " . (scalar @wanted_files) . " files\n";
}
else
{
    print "No input files\n";
}
use strict;
use warnings;

my @a;
my $i = 1;
my $find1 = "PYAG_GENERATED";
my $n = 1;
my $total_files = 47276;  # got this no. of files by running 'ls' in the terminal

while ( $n <= $total_files ) {
    open( NEW, "<output_$n.txt" ) or die "could not open: $!";
    my $join = join( '', <NEW> );
    $a[$i] = $join;
    #print "$a[10]";
    $n++;
    $i++;
}
close NEW;

for ( $i = 1; $i <= $total_files; $i++ ) {
    if ( $a[$i] =~ m/$find1/ ) {
        open( NEW1, ">>new_1.tmp" ) or die "could not open: $!";
        print NEW1 $a[$i];
    }
}
close NEW1;
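If you don't want to count the files by hand with ls, glob can find them for you. A small sketch of the same idea (it assumes the output_nnnn.txt files are in the current directory):
use strict;
use warnings;

# Let Perl discover the files instead of hard-coding the count
my @files = glob('output_*.txt');

open( my $out, '>>', 'new_1.tmp' ) or die "could not open: $!";
for my $file (@files) {
    open( my $in, '<', $file ) or die "could not open $file: $!";
    my $content = join '', <$in>;
    close $in;
    print $out $content if $content =~ /PYAG_GENERATED/;
}
close $out;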
I'm new to Perl, so maybe this is a very basic case that I still can't understand.
Case:
The program tells the user to type the file name.
The user types the file name (1 or more files).
The program reads the content of the input file(s).
If it's a single file input, it just prints that file's entire content.
If it's a multi-file input, it combines the contents of each file in sequence.
And then it prints the result to a temporary new file, located in the same directory as program.pl.
file1.txt:
head
a
b
end
file2.txt:
head
c
d
e
f
end
SINGLE INPUT program ioSingle.pl:
#!/usr/bin/perl
print "File name: ";
$userinput = <STDIN>; chomp ($userinput);
#read content from input file
open ("FILEINPUT", $userinput) or die ("can't open file");
# PRINT the CONTENT as long as there is content in that file
while (<FILEINPUT>) {
    print;
}
close FILEINPUT;
SINGLE RESULT in cmd:
>perl ioSingle.pl
File name: file1.txt
head
a
b
end
I found tutorial code that combines content from multi-file input, but I cannot adapt the while loop to the code above:
while ($userinput = <>) {
print ($userinput);
}
I was stuck at making it work for multi-file input.
How am I supposed to rework the code so my program gives a result like this?
EXPECTED MULTI-FILE RESULT in cmd:
>perl ioMulti.pl
File name: file1.txt file2.txt
head
a
b
end
head
c
d
e
f
end
I appreciate your response :)
A good way to start working on a problem like this, is to break it down into smaller sections.
Your problem seems to break down to this:
get a list of filenames
for each file in the list
    display the file contents
So think about writing subroutines that do each of these tasks. You already have something like a subroutine to display the contents of the file.
sub display_file_contents {
    # filename is the first (and only) argument to the sub
    my $filename = shift;

    # Use lexical filehandle and three-arg open
    open my $filehandle, '<', $filename or die $!;

    # Shorter version of your code
    print while <$filehandle>;
}
The next task is to get our list of files. You already have some of that too.
sub get_list_of_files {
    print 'File name(s): ';
    my $files = <STDIN>;
    chomp $files;

    # We might have more than one filename. Need to split input.
    # Assume filenames are separated by whitespace
    # (Might need to revisit that assumption - filenames can contain spaces!)
    my @filenames = split /\s+/, $files;

    return @filenames;
}
We can then put all of that together in the main program.
#!/usr/bin/perl
use strict;
use warnings;
my @list_of_files = get_list_of_files();

foreach my $file (@list_of_files) {
    display_file_contents($file);
}
By breaking the task down into smaller tasks, each one becomes easier to deal with. And you don't need to carry the complexity of the whole program in your head at one time.
p.s. But like JRFerguson says, taking the list of files as command line parameters would make this far simpler.
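A minimal sketch of that command-line variant (it reuses the display_file_contents sub from above):
#!/usr/bin/perl
use strict;
use warnings;

# Filenames arrive on the command line, e.g.: perl ioMulti.pl file1.txt file2.txt
foreach my $file (@ARGV) {
    display_file_contents($file);
}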
The easy way is to use the diamond operator <> to open and read the files specified on the command line. This would achieve your objective:
while (<>) {
    chomp;
    print "$_\n";
}
Thus: ioSingle.pl file1.txt file2.txt
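If you still want to prompt for the names rather than pass them on the command line, one option (just a sketch, not the only way) is to split the answer into @ARGV yourself and then let <> do the rest:
#!/usr/bin/perl
use strict;
use warnings;

print 'File name: ';
chomp( my $userinput = <STDIN> );

# <> reads from the files named in @ARGV, so fill it from the prompt
@ARGV = split /\s+/, $userinput;

while (<>) {
    print;
}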
If this is the sole objective, you can reduce this to a command line script using the -p or -n switch like:
perl -pe '1' file1.txt file2.txt
perl -ne 'print' file1.txt file2.txt
These switches create implicit loops around the -e commands. The -p switch prints $_ after every loop as if you had written:
LINE:
while (<>) {
    # your code...
} continue {
    print;
}
Using -n creates:
LINE:
while (<>) {
    # your code...
}
Thus, -p adds an implicit print statement.
I have a text file with data in the format mentioned below:
#rectype='ABC' #recname='123' #rec_id='1K2j' etc...
#rectype='DEF' #recname='matin' #rec_id='458i' etc...
#rectype='ABC' #recname='John' #rec_id='lom0' etc...
#rectype='GHI' #recname='Kalme, #rec_id='pl90' etc...
#rectype='KLM' #recname='Kitty' #rec_id='987k' etc...
#rectype='ABC' #recname='OMR' #rec_id='lo09' etc...
Now I have to delete all the lines having #rectype='ABC'. There are multiple lines of this kind in the input file. It's kind of urgent, and as I am not a Perl coder, I am finding it difficult to figure out a way.
Please suggest!!!
NOTE: I need to make changes in the input file only. I don't need to create a separate output file.
You don't need to do it in Perl. You can use the grep tool.
grep -v "#rectype='ABC'" input_file > output_file
grep -v means "Print every line that does not match this expression."
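Note that the redirection must go to a different file; redirecting straight back to input_file would truncate it before grep reads it. If the result has to end up in the original file, write to a temporary file and move it back over the original, for example:
grep -v "#rectype='ABC'" input_file > input_file.tmp && mv input_file.tmp input_file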
perl -i -ne 'print if !/\#rectype = \047ABC\047/x' text_file
#!/usr/bin/perl
use warnings;
use strict;
use File::Slurp;
my $output = 'output.txt';
open my $outfile, '>', $output or die "Can't write to $output: $!";
my @array = read_file('input.txt');

for (@array) {
    next if ($_ =~ /^\#rectype='ABC'/);
    print $outfile $_;
}
Output (saved to 'output.txt'):
#rectype='DEF' #recname='matin' #rec_id='458i' etc...
#rectype='GHI' #recname='Kalme, #rec_id='pl90' etc...
#rectype='KLM' #recname='Kitty' #rec_id='987k' etc...
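Since the question asks for the change to be made in the input file itself, a small variation is to write the filtered lines back to input.txt after slurping it (a sketch; like read_file, it assumes the whole file fits in memory):
#!/usr/bin/perl
use warnings;
use strict;
use File::Slurp;

# The file is already fully read into memory, so overwriting it afterwards is safe
my @array = read_file('input.txt');
my @keep  = grep { !/^\#rectype='ABC'/ } @array;
write_file( 'input.txt', @keep );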
I'm writing a Perl script to run an external program on every file in a directory. This program converts files from one format to another. Here's the deal...
When I run the program from the command line, everything works as it should:
computer.name % /path/program /inpath/input.in /outpath/output.out
converting: /inpath/input.in to /outpath/output.out
computer.name %
Here's the code I wrote to convert all files in a directory (listed in "file_list.txt"):
#!/usr/bin/perl -w
use warnings;
use diagnostics;
use FileHandle;
use File::Copy;
# Set simulation parameters and directories
@test_dates = ("20110414");
$listfile = "file_list.txt";
$execname = "/path/program";

foreach $date (@test_dates)
{
    # Set/make directories
    $obs_file_dir  = "inpath";
    $pred_file_dir = "outpath";
    mkdir "$pred_file_dir", 0755 unless -d "$pred_file_dir";

    # Read input file names to array
    $obs_file_list = $obs_file_dir . $listfile;
    open(DIR, $obs_file_list) or die "Could not open file!";
    @obs_files = <DIR>;
    close(DIR);

    # Convert and save files
    foreach $file (@obs_files)
    {
        $file =~ s/(\*)//g;
        $infile = $obs_file_dir . $file;
        $outfile = $pred_file_dir . $file;
        $outfile =~ s/in/out/g;
        print $infile . "\n";
        @arg_list = ($execname, $infile, $outfile);
        system(@arg_list);
    }
}
The output shows me the following error for every file in the list:
computer.name % perl_script_name.pl
/inpath/input.in
converting: /inpath/input.in to /outpath/output.out
unable to find /inpath/input.in
stat status=-1
error while processing the product
I verified every file is in the proper place and have no idea why I am getting this error. Why can't the files be found? When I manually pass the arguments using the command line, no problem. When I pass the arguments through a variable via a system call, they can't be found even though the path and file names are correct.
Your advice is greatly appreciated!
Your list of files (@obs_files) comes from reading in a file via @obs_files = <DIR>;
When you do that, each element of the array will be a line from the file (e.g. a directory listing), with the line being terminated by a newline character.
Before using it, you need to remove the newline character via chomp($file).
Please note that s/(\*)//g; does NOT remove that trailing newline!
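In other words, something along these lines in the inner loop (a sketch based on the code above, with only the chomp added):
foreach $file (@obs_files)
{
    chomp($file);    # strip the trailing newline first
    $file =~ s/(\*)//g;
    $infile = $obs_file_dir . $file;
    $outfile = $pred_file_dir . $file;
    $outfile =~ s/in/out/g;
    print $infile . "\n";
    @arg_list = ($execname, $infile, $outfile);
    system(@arg_list);
}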
I want to run an executable, ./runnable, on the argument input.afa. The standard input to this executable is given through a file, finalfile. I was earlier trying to do the same using a bash script, but that did not seem to work out. So I was wondering whether Perl provides such functionality. I know I can run the executable with its argument using backticks or a system() call. Any suggestions on how to give standard input through a file?
UPDATE
As I said, I had written a bash script for the same task. I'm not sure how to go about doing it in Perl. The bash script I wrote was:
#!/bin/bash
OUTFILE=outfile
(
    while read line
    do
        ./runnable input.afa
        echo $line
    done < finalfile
) > $OUTFILE
The data in the standard-input file is as follows, where each line corresponds to one input. So if there are 10 lines, the executable should run 10 times.
__DATA__
2,9,2,9,10,0,38
2,9,2,10,11,0,0
2,9,2,11,12,0,0
2,9,2,12,13,0,0
2,9,2,13,0,1,4
2,9,2,13,3,2,2
2,9,2,12,14,1,2
If I understood your question correctly, then you are perhaps looking for something like this:
# The command to run.
my $command = "./runnable input.afa";
# $command will be run for each line in $command_stdin
my $command_stdin = "finalfile";
# Open the file pointed to by $command_stdin
open my $inputfh, '<', $command_stdin or die "$command_stdin: $!";

# For each line
while (my $input = <$inputfh>) {
    chomp($input); # optional, removes line separator

    # Run the command that is pointed to by $command,
    # and open $write_stdin as the write end of the command's
    # stdin.
    open my $write_stdin, '|-', $command or die "$command: $!";

    # Write the arguments to the command's stdin.
    print $write_stdin $input;

    # Close the pipe so the command sees end-of-file and finishes
    # before the next line is processed.
    close $write_stdin;
}
More info about opening commands in the documentation.
Perl code:
$stdout_result = `exescript argument1 argument2 < stdinfile`;
Where stdinfile holds the data you want to be passed through stdin.
edit
The clever method would be to open stdinfile, tie it via select to stdin, and then execute repeatedly. The easy method would be to put the data you want to pass through in a temp file.
Example:
open $fh, "<", "datafile" or die($!);
#data = <$fh>; #sucks all the lines in datafile into the array #data
close $fh;
foreach $datum (#data) #foreach singluar datum in the array
{
#create a temp file
open $fh, ">", "tempfile" or die($!);
print $fh $datum;
close $fh;
$result = `exe arg1 arg2 arg3 < tempfile`; #run the command. Presumably you'd want to store it somewhere as well...
#store $result
}
unlink("tempfile"); #remove the tempfile