Delete date and time from text file - perl

I have a text file having date and time in below mentioned format at the 4th line of the file:
[0x1FFD] LOG 2017/02/22 06:20:48.644 Diagnostic Version Length: 0149 255
Now, I have to delete the string "2017/02/22 06:20:48.644" in the file.
This date and time is not constant and will change whenever I save the file (it takes the current date and time).
As I am not a Perl coder, I am finding it difficult to figure out the way.
NOTE: I need to make changes in the input file only. I don't need to create a separate output file.
Thanks in advance!

use strict;
use warnings;
my $str = " [0x1FFD] LOG 2017/02/22 06:20:48.644 Diagnostic Version and more stuff";
$str =~ s|\d{4}/\d{2}/\d{2} \d{2}:\d{2}:\d{2}\.\d{3}||;
print $str;
If it is in a file, you need to loop through the file and print each line with the date excluded.
Like this:
use strict;
use warnings;
open FILE, "<", "filename.log" or die $!;
my @list = <FILE>;
foreach my $str (@list) {
$str =~ s|\d{4}/\d{2}/\d{2} \d{2}:\d{2}:\d{2}\.\d{3}||;
print $str;
}
close(FILE);
So from there you can figure out how to write it back to the original file. :)
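One way to finish the job is a minimal sketch assuming the whole file fits in memory: read all lines, strip the timestamp, then reopen the same file for writing. The file name sample.log and its single line are invented here just for the demonstration.

```perl
use strict;
use warnings;

# Invented throwaway file and contents for the demonstration.
my $file = "sample.log";
open my $make, ">", $file or die "Can't create $file: $!";
print $make "[0x1FFD] LOG 2017/02/22 06:20:48.644 Diagnostic Version Length: 0149 255\n";
close $make;

# Read every line into memory, since we are about to overwrite the file.
open my $in, "<", $file or die "Can't read $file: $!";
my @lines = <$in>;
close $in;

# Strip the timestamp (and the space after it) from each line.
s|\d{4}/\d{2}/\d{2} \d{2}:\d{2}:\d{2}\.\d{3} ?|| for @lines;

# Reopen the same file for writing, clobbering the old contents.
open my $out, ">", $file or die "Can't write $file: $!";
print $out @lines;
close $out;
```

Reading everything before reopening is important: opening a file with ">" truncates it immediately, so you cannot read and overwrite it through the same handle this way.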

You can use the Tie::File module to update the input file in a single pass.
use warnings;
use strict;
use Tie::File;
my $str = 'data1.txt';
tie my @lines, 'Tie::File', $str or die $!;
my $joinLines = join "\n", @lines;
Either use #1 or #2 based on which regex modification you prefer
#1. $joinLines =~ s/(LOG\s)(.*?)(\sDiagnostic Version)/$1$3/g;
#2. $joinLines =~ s|\d{4}/\d{2}/\d{2} \d{2}:\d{2}:\d{2}\.\d{3}||;
print $joinLines;
@lines = split /\n/, $joinLines;
untie @lines;
Please check and test at your end.

Related

File not getting copied in perl

File "/root/actual" is not getting over written with content of "/root/temp" via perl script. If manually edited "/root/actual" is getting modified.
copy("/root/actual","/root/temp") or die "Copy failed: $!";
open(FILE, "</root/temp") || die "File not found";
my @lines = <FILE>;
close(FILE);
my @newlines;
foreach (@lines) {
$_ =~ s/$aref1[0]/$profile_name/;
push(@newlines,$_);
}
open(FILE, ">/root/actual") || die "File not found";
print FILE @newlines;
close(FILE);
File "/root/actual" is not getting over written with content of "/root/temp" via perl script. If manually edited "/root/actual" is getting modified.
Do you mean that /root/temp isn't being replaced by /root/actual? Or is /root/temp being modified as you wish, but it's not copying over /root/actual at the end of your program?
I suggest that you read up on modern Perl programming practices. You need to have use warnings; and use strict; in your program. In fact, many people on this forum won't bother answering Perl questions unless use strict; and use warnings; are used.
Where is $aref1[0] coming from? I don't see @aref1 declared anywhere in your program. Or, for that matter, $profile_name.
If you're reading the entire file in and applying a regular expression to it, there's no reason to copy it over to a temporary file first.
I rewrote what you had in a more modern syntax:
use strict;
use warnings;
use autodie;
use constant {
FILE_NAME => 'test.txt',
};
my $profile_name = "bar"; # Taking a guess
my @aref1 = qw(foo ??? ??? ???); # Taking a guess
open my $input_fh, "<", FILE_NAME;
my @lines = <$input_fh>;
close $input_fh;
for my $line ( @lines ) {
$line =~ s/$aref1[0]/$profile_name/;
}
open my $output_fh, ">", FILE_NAME;
print {$output_fh} @lines;
close $output_fh;
This works.
Notes:
use autodie; means you don't have to check whether files opened.
When I use a for loop, I can do in-place replacement in the array: the loop variable is an alias for the current element of the array.
No need for copy or a temporary file since you're replacing the original file anyway.
I didn't use it here since you didn't, but map { s/$aref1[0]/$profile_name/ } @lines; can replace that for loop. See map.
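The aliasing behaviour is easy to verify with a throwaway experiment (the data and the stand-ins for $aref1[0] and $profile_name are invented):

```perl
use strict;
use warnings;

# Invented stand-ins for $aref1[0] and $profile_name.
my @lines       = ("foo one\n", "foo two\n");
my $pattern     = "foo";
my $replacement = "bar";

# $line is an alias for each element in turn, so the substitution
# modifies @lines itself, not a copy.
for my $line (@lines) {
    $line =~ s/$pattern/$replacement/;
}

print @lines;   # bar one / bar two
```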

How to store file content sentence by sentence in an array

I want to open a file, and store its content in an array and make changes to each sentence one at a time and then print the output of the file.
I have something like this:
open (FILE , $file);
my @lines = split('.' , <FILE>)
close FILE;
for (@lines) {
s/word/replace/g;
}
open (FILE, ">$file");
print FILE @lines;
close FILE;
For some reason, Perl doesn't like this and won't output any content into the new file. It seems not to like me splitting up the array. Can someone explain why Perl does this and give a possible fix? Thanks!
split needs a regexp. Change split('.' , <FILE>) to split(/\./ , <FILE>)
Change my #lines = split('.' , <FILE>) to my #lines = split('\.' , <FILE>)
Only . is used in regex to match a single character. So you need to escape . to split on full stop.
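The difference is easy to demonstrate with an invented sample string:

```perl
use strict;
use warnings;

my $text = "one. two. three.";   # invented sample string

# '.' in a pattern matches ANY character, so every character is
# treated as a delimiter, all fields come out empty, and split
# discards trailing empty fields -- leaving nothing at all.
my @wrong = split /./, $text;

# Escaped, it splits on literal full stops.
my @right = split /\./, $text;

print scalar(@wrong), "\n";   # 0
print scalar(@right), "\n";   # 3
```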
#!/usr/local/bin/perl
use strict;
use warnings;
my $filename = "somefile.txt";
my $contents = do { local(@ARGV, $/) = $filename; <> };
my @lines = split '\.', $contents;
foreach(@lines){
# @lines is an array which contains one sentence at each index.
}
What I found was that the second line of your script is missing a semicolon (;): that is the error. Also, your script cannot handle the content of the entire file; it will process only one line. So please find the modification of your script below. If you need any clarification, please let me know.
my $file='test.txt';#input file name
open (FILE , $file);
#my @lines = split('\.' ,<FILE>); this will not process the entire content of the file.
my @lines;
while(<FILE>) {
s/word/replace/g;
push(@lines,$_);
}
close FILE;
open (FILE, ">$file");
print FILE @lines;
close FILE;
You have lots of problems in your code.
my @lines = split('.' , <FILE>) will just read the first line and split it.
split('.' should be split(/\./
my @lines = split('.' , <FILE>) has no semicolon terminator.
print FILE @lines; - you have lost all your full stops!
Finally, I have to wonder why you are bothered about 'sentences' at all when you are just replacing one word. If you really want to read one sentence at a time (presumably to do some kind of sentence-based processing) then you need to change the input record separator variable $/. For example:
#!/usr/bin/perl
use strict;
use warnings;
my $file = "data.txt";
open (FILE , $file);
my @buffer;
$/ = '.'; # Change the Input Separator to read one sentence at a time.
# Be careful though, it won't work for questions ending in ?
while ( my $sentence = <FILE> ) {
$sentence =~ s/word/replace/g;
push @buffer, $sentence;
}
}
close FILE;
.. saving to file is left for you to solve.
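The saving step left open above can be sketched like this, assuming an invented data.txt whose sample contents are created first. Because $/ leaves the full stop attached to each chunk, printing the buffer back preserves the punctuation:

```perl
use strict;
use warnings;

# Invented file name and contents, created just for the demonstration.
my $file = "data.txt";
open my $make, ">", $file or die "Can't create $file: $!";
print $make "word one. word two. last word.";
close $make;

local $/ = '.';   # read one sentence at a time

open my $in, "<", $file or die "Can't read $file: $!";
my @buffer;
while ( my $sentence = <$in> ) {
    $sentence =~ s/word/replace/g;
    push @buffer, $sentence;
}
close $in;

# $/ leaves the '.' attached to each sentence, so printing the
# buffer back reproduces the punctuation.
open my $out, ">", $file or die "Can't write $file: $!";
print $out @buffer;
close $out;
```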
However if you just want to change the strings you can read the whole file in one gulp by setting $/ to undef. Eg:
#!/usr/bin/perl
use strict;
use warnings;
my $file = "data.txt";
open (FILE , $file);
$/ = undef; # Slurp mode!
my $buffer = <FILE>;
close FILE;
$buffer =~ s/word/replace/g;
open (FILE, ">$file");
print FILE $buffer;
close FILE;
If you are really looking to process sentences and you want to get questions then you probably want to slurp the whole file and then split it, but use a capture in your regex so that you don't lose the punctuation. Eg:
#!/usr/bin/perl
use strict;
use warnings;
my $file = "data.txt";
open (FILE , $file);
$/ = undef; # slurp!
my $buffer = <FILE>;
close FILE;
open (FILE, ">$file" . '.new'); # Don't want to overwrite my input.
foreach my $sentence (split(/([.?]+)/, $buffer)) # split uses () to capture the punctuation.
{
$sentence =~ s/word/replace/g;
print FILE $sentence;
}
close FILE;

Increment variables in files

I am a complete rookie with Perl. What I am trying to do is to open a list of files, increment three different variables in each file, save the files, and close.
The variables look like this
This_Is_My_Variable03
This_Is_My_Variable02
This_Is_My_Variable01
The variable ending in 01 is in the file multiple times. The variables are at times part of a character string. The This_Is_My_Variable part of the variable never changes.
Thanks.
This may not be the best solution but it works
#!perl=C:\IBM\RationalSDLC\ClearCase\bin\perl
use warnings;
use strict;
use Tie::File;
tie my @data, 'Tie::File', 'myfile.txt' or die $!;
s/(This_Is_My_Variable)(\d+)+/$1.++($_=$2)/eg for @data;
untie @data;
Thank you Borodin for getting me started with Tie::File: that definitely helped.
Second solution using while loop
#!perl=C:\IBM\RationalSDLC\ClearCase\bin\perl
use warnings;
#use strict;
sub inc {
my ($num) = @_;
++$num;
}
open(FILE, "myfile.txt") || die $!;
$i = 0;
while (<FILE>) {
$string = $_;
if (/This_Is_My_Variable../) {
$string =~ s/(This_Is_My_Variable)(\d+)+/$1.++($_=$2)/eg;
print "$string \n";
$i++;
}
else {
print "$string \n";
}
}
close FILE;
Your "Second solution using while loop" has a number of problems.
Never disable use strict to get a program working. All that does is hide problems in your code
You have an unused subroutine inc and an unused variable $i
You should always use the three-parameter form of open, and lexical file handles
There is no need to test whether the line contains a string before applying a substitution
You can simply use ($2+1) in your replacement string, rather than assigning the value to $_ and incrementing it with ++($_=$2)
If you are going to use a named variable for the lines read from the file, then generally you should use while (my $string = <$fh>) {...}. For short blocks like this it is better just to use $_
You don't chomp the input, which would be fine except that you are printing an additional space and newline after each line
You have print "$string \n" in your code twice. It may as well appear just once outside the if structure
This code performs the same process. I hope it helps.
use strict;
use warnings;
open(my $fh, '<', 'myfile.txt') || die $!;
while (<$fh>) {
s/(This_Is_My_Variable)(\d+)/$1.($2+1)/eg;
print;
}
use strict;
use warnings;
use Tie::File;
tie my @data, 'Tie::File', 'myfile' or die $!;
s/(\d+)$/sprintf '%02d', $1+1/e for @data;
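Both Tie::File answers rely on the /e modifier, which evaluates the replacement side as Perl code rather than a plain string; a minimal demonstration on an invented input:

```perl
use strict;
use warnings;

my $line = "This_Is_My_Variable01";   # invented sample

# /e evaluates the replacement side as Perl code, so the captured
# digits can be incremented and re-padded with sprintf.
$line =~ s/(\d+)$/sprintf '%02d', $1 + 1/e;

print "$line\n";   # This_Is_My_Variable02
```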

Perl's Chomp: Chomp is removing the whole word instead of the newline

I am facing issues with perl chomp function.
I have a test.csv as below:
col1,col2
vm1,fd1
vm2,fd2
vm3,fd3
vm4,fd4
I want to print the 2nd field of this csv. This is my code:
#!/usr/bin/perl -w
use strict;
my $file = "test.csv";
open (my $FH, '<', $file);
my @array = (<$FH>);
close $FH;
foreach (@array)
{
my @row = split (/,/,$_);
my $var = chomp ($row[1]); ### <<< this is the problem
print $var;
}
The output of the above code is:
11111
I really don't know where the "1" is coming from. Actually, the last field can be printed as below:
foreach (@array)
{
my @row = split (/,/,$_);
print $row[1]; ### << Note that I am not printing "\n"
}
the output is:
vm_cluster
fd1
fd2
fd3
fd4
Now, I am using these field values as input to the DB, and the DB INSERT statement is failing due to this invisible newline. So I thought chomp would help me here; instead of chomping, it gives me "11111".
Could you help me understand what I am doing wrong here?
Thanks.
Adding more information after reading loldop's response:
If I write as below, then it will not print anything (not even the "11111" output mentioned above)
foreach (@array)
{
my @row = split (/,/,$_);
chomp ($row[1]);
my $var = $row[1];
print $var;
}
Meaning, chomp is removing the whole string as well as the trailing newline.
The reason you see only a string of 1s is that you are printing the value of $var, which is the value returned from chomp. chomp doesn't return the trimmed string: it modifies its parameter in place and returns the number of characters removed from the end. Since it always removes exactly one "\n" character here, you get a 1 output for each element of the array.
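A two-line experiment makes this visible (the string is invented):

```perl
use strict;
use warnings;

my $field = "fd1\n";        # invented sample value
my $ret   = chomp $field;   # chomp edits $field in place

print "returned: $ret\n";   # 1 -- the number of characters removed
print "string: <$field>\n"; # <fd1> -- no trailing newline left
```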
You really should use warnings instead of the -w command-line option, and there is no reason here to read the entire file into an array. But well done on using a lexical filehandle with the three-parameter form of open.
Here is a quick refactoring of your program that will do what you want.
#!/usr/bin/perl
use strict;
use warnings;
my $file = 'test.csv';
open my $FH, '<', $file or die qq(Unable to open "$file": $!);
while (<$FH>) {
chomp;
my #row = split /,/;
print $row[1], "\n";
}
Although, it was my fault at the beginning: the chomp function returns 1, which is the result of calling the function, not the trimmed string.
You can also find a bad example below. It will work, but only if the values are numbers.
Sometimes I use this cheat (don't do that! it is my bad-hack code!)
map{/filter/ && $_;}@all_to_filter;
instead of this, use
grep{/filter/}@all_to_filter;
foreach (@array)
{
my @row = split (/,/,$_);
my $var = chomp ($row[1]) * $row[1]; ### this is bad code!
print $var;
}
foreach (@array)
{
my @row = split (/,/,$_);
chomp ($row[1]);
my $var = $row[1];
print $var;
}
If you simply want to get rid of new lines you can use a regex:
my $var = $row[1];
$var=~s/\n//g;
So, I was quite frustrated with this easy-looking task bugging me for the whole day. I really appreciate everyone who responded.
Finally I ended up using the Text::CSV Perl module and then accessing each CSV field through an array reference. There was no need to run chomp after using Text::CSV.
Here is the code:
#!/usr/bin/perl
use warnings;
use strict;
use Text::CSV;
my $csv = Text::CSV->new ( { binary => 1 } ) # should set binary attribute.
or die "Cannot use CSV: ".Text::CSV->error_diag ();
open my $fh, "<:encoding(utf8)", "vm.csv" or die "vm.csv: $!";
<$fh>; ## this is to remove the column headers.
while ( my $row = $csv->getline ($fh) )
{
print $row->[1];
}
and here is the output:
fd1fd2fd3fd4
Later I pulled out these individual values and inserted them into the DB.
Thanks everyone.

Getting unique random line (at each script run) from an text file with perl

Having a text file like the following, called "input.txt":
some field1a | field1b | field1c
...another approx 1000 lines....
fielaNa | field Nb | field Nc
I can choose any field delimiter.
I need a script which at every discrete run will get one unique (never repeated) random line from this file, until all lines are used.
My solution: I added one column to the file, so I have
0|some field1a | field1b | field1c
...another approx 1000 lines....
0|fielaNa | field Nb | field Nc
and processing it with the next code:
use 5.014;
use warnings;
use utf8;
use List::Util;
use open qw(:std :utf8);
my $file = "./input.txt";
#read all lines into array and shuffle them
open(my $fh, "<:utf8", $file);
my @lines = List::Util::shuffle map { chomp $_; $_ } <$fh>;
close $fh;
#search for the 1st line what has 0 at the start
#change the 0 to 1
#and rewrite the whole file
my $random_line;
for(my $i=0; $i<=$#lines; $i++) {
if( $lines[$i] =~ /^0/ ) {
$random_line = $lines[$i];
$lines[$i] =~ s/^0/1/;
open($fh, ">:utf8", $file);
print $fh join("\n", @lines);
close $fh;
last;
}
}
$random_line = "1|NO|more|lines" unless( $random_line =~ /\w/ );
do_something_with_the_fields(split /\|/, $random_line);
exit;
It is a working solution, but not a very nice one, because:
the line order changes at each script run
it is not safe for concurrent script runs.
How can I write it more effectively and more elegantly?
What about keeping a shuffled list of the line numbers in a different file, removing the first one each time you use it? Some locking might be needed to ensure concurrent script-run safety.
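The locking idea can be sketched with flock. This is only an illustration: state.txt is an invented state file holding the remaining shuffled line numbers, created here just for the demonstration.

```perl
use strict;
use warnings;
use Fcntl qw(:flock);

# Invented state file for the demonstration: one shuffled
# line number per line.
my $state_file = "state.txt";
open my $init, ">", $state_file or die "Can't create $state_file: $!";
print $init "2\n0\n1\n";
close $init;

# Hold an exclusive lock for the whole read-modify-write cycle,
# so two concurrent runs cannot hand out the same line.
open my $fh, "+<", $state_file or die "Can't open $state_file: $!";
flock $fh, LOCK_EX or die "Can't lock $state_file: $!";

chomp( my @indices = <$fh> );   # remaining line numbers
my $next = shift @indices;      # take one for this run

# Rewrite the file without the index we just used.
seek $fh, 0, 0;
truncate $fh, 0;
print $fh map { "$_\n" } @indices;

close $fh;                      # closing the handle releases the lock
print defined $next ? "picked line $next\n" : "no lines left\n";
```

A second process that opens the same file and calls flock with LOCK_EX will block until the first close, so each run hands out a distinct index.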
From perlfaq5.
How do I select a random line from a file?
Short of loading the file into a database or pre-indexing the lines in
the file, there are a couple of things that you can do.
Here's a reservoir-sampling algorithm from the Camel Book:
srand;
rand($.) < 1 && ($line = $_) while <>;
This has a significant advantage in space over reading the whole file
in. You can find a proof of this method in The Art of Computer
Programming, Volume 2, Section 3.4.2, by Donald E. Knuth.
You can use the File::Random module which provides a function for that
algorithm:
use File::Random qw/random_line/;
my $line = random_line($filename);
Another way is to use the Tie::File module, which treats the entire
file as an array. Simply access a random array element.
All Perl programmers should take the time to read the FAQ.
Update: To get a unique random line each time you're going to have to store state. The easiest way to store the state is to remove the lines that you've used from the file.
This program uses the Tie::File module to open your input.txt file as well as an indices.txt file.
If indices.txt is empty then it is initialised with the indices of all the records in input.txt in a shuffled order.
Each run, the index at the end of the list is removed and the corresponding input record displayed.
use strict;
use warnings;
use Tie::File;
use List::Util 'shuffle';
tie my @input, 'Tie::File', 'input.txt'
or die qq(Unable to open "input.txt": $!);
tie my @indices, 'Tie::File', 'indices.txt'
or die qq(Unable to open "indices.txt": $!);
@indices = shuffle(0..$#input) unless @indices;
my $index = pop @indices;
print $input[$index];
Update
I have modified this solution so that it populates a new indices.txt file only if it doesn't already exist and not, as before, simply when it is empty. That means a new sequence of records can be printed simply by deleting the indices.txt file.
use strict;
use warnings;
use Tie::File;
use List::Util 'shuffle';
my ($input_file, $indices_file) = qw( input.txt indices.txt );
tie my @input, 'Tie::File', $input_file
or die qq(Unable to open "$input_file": $!);
my $first_run = not -f $indices_file;
tie my @indices, 'Tie::File', $indices_file
or die qq(Unable to open "$indices_file": $!);
@indices = shuffle(0..$#input) if $first_run;
@indices or die "All records have been displayed";
my $index = pop @indices;
print $input[$index];