How to add time and replace it in a file in Perl?

I have written the following code to fetch the date from the server and display it in yyyy/mm/dd-hh:mm:ss format.
#!/usr/bin/perl
system(`date '+ %Y/%m/%d-%H:%M:%S' >ex.txt`);
open(MYINPUTFILE, "/tmp/ranjan/ex.txt");
while(<MYINPUTFILE>)
{
chomp;
print "$_\n";
}
close(MYINPUTFILE);
output:
2013/07/29-18:58:04
I want to add two minutes to the time and replace the time present in the file. Please give me some ideas.

Change your date command to add the 2 minutes:
date --date "+2 min" '+ %Y/%m/%d-%H:%M:%S'
or a Perl version:
use POSIX;
print strftime("%Y/%m/%d-%H:%M:%S", localtime(time + 120));

It is best to use Time::Piece to do the parsing and formatting of dates. It is a core module and shouldn't need installation.
Unusually, in this case the replacement date/time string is exactly the same length as the original string read from the file, so the modification can be done in-place. Normally the overall length of a file changes, so it is necessary either to create a new file and delete the old one, or to read the entire file into memory and write it out again.
This program opens the file for simultaneous read/write, reads the first line from the file, parses it using Time::Piece, adds two minutes (120 seconds), seeks to the start of the file again, and prints the new date/time reformatted in the same way as the original back to the file.
use strict;
use warnings;
use autodie;
use Time::Piece;
my $format = '%Y/%m/%d-%H:%M:%S';
open my $fh, '+<', 'ex.txt';
my $date_time = <$fh>;
chomp $date_time;
$date_time = Time::Piece->strptime($date_time, $format);
$date_time += 60 * 2;
seek $fh, 0, 0;
print $fh $date_time->strftime($format);
close $fh;
output
2013/07/29-19:00:04

Convert date from file in Perl

In my program, I need to open my text file, find the dates, and convert them. I don't know how to convert the dates with my code. Right now the result is in mm/dd/yyyy; I'd like to change it to dd-mm-yyyy. Is it possible to do this with my code? How should I fix it?
#my program
use strict;
use warnings;
use feature 'say';
open my $file, 'file.txt' or die "Error\n";
my $re = qr/\d{1,2}\/\d{1,2}\/\d{1,4}/i; #mm/dd/yyyy
#my $re = qr/\d{1,2}-\d{1,3}/i; #postcode
while(my $fh = <$file>) {
if (my @match = $fh =~ /$re/g) {
say for @match;
}
}
#my file.txt
Today is 03.02.2020. Tommorow will be 03.03.2020.
To get the results of a match captured, you need to write parentheses around the parts you want to keep:
my $re = qr/(\d{1,2})\/(\d{1,2})\/(\d{1,4})/i; #mm/dd/yy
Also, your sample file.txt separates the components with dots rather than slashes, so you need to either change the file or the regular expression.
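Putting both corrections together, a minimal sketch (assuming the dot-separated dates that actually appear in file.txt, and simply printing the reformatted dates):
use strict;
use warnings;
use feature 'say';
my $re = qr/(\d{1,2})\.(\d{1,2})\.(\d{4})/;   # dd.mm.yyyy, with captures
open my $file, '<', 'file.txt' or die "Error\n";
while (my $line = <$file>) {
    while ($line =~ /$re/g) {
        say "$1-$2-$3";   # swap $1 and $2 here if day and month need reordering
    }
}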
If the goal is to change the date format in the file, then this can be achieved with the following one-liner:
perl -0777 -pe "s/\b(\d{2})\.(\d{2})\.(\d{2,4})\b/$1-$2-$3/g" -i file.txt
Options:
-0777 read the whole file in at once
-pe execute the quoted script and print out the result
-i do the replacement in place (the original file is modified to the desired result)
Note:
On MS Windows the script should be wrapped in double quotes ("perl script"), on UNIX systems in single quotes ('perl script').
\b can be omitted if there is no strict requirement that the dates be delimited by word boundaries.
Attention:
Please make a copy of the original file in case you make a mistake.
Goal:
substitute all dates in the file from the form 'dd.mm.yyyy' to 'dd-mm-yyyy'
Errors:
In the OP's code the regex is tuned to the date format 'dd/mm/yyyy', which does not match the data in the input file ('dd.mm.yyyy')
Procedure:
open the file, read it line by line, substitute every occurrence of a 'dd.mm.yyyy' date in each line with 'dd-mm-yyyy', output the result to the console, close the file, done
Note:
The OP's code outputs the date one digit per line to the console -- definitely not what was intended [corrected]. To substitute in place in the input file, the code should be modified, or see the one-liner provided above.
use strict;
use warnings;
use feature 'say';
# Uncomment below to read from a file
# my $filename = 'file.txt';
# open my $file, '<', $filename
# or die "Error\n";
# NOTE:
# In the provided file the dates are in the following format:
# 03.02.2020
# but in the question you refer to
# 03/02/2020
# The code is adjusted accordingly; the substitution applies
# only to dates matching the regex in a line (no need for 'if')
while(<DATA>) { # replace DATA with $file to read from a file
chomp; # snip eol
s/(\d{1,2})\.(\d{1,2})\.(\d{1,4})/$1-$2-$3/g; # substitute all date occurrences in the line
say;
}
__DATA__
#my file.txt
Today is 03.02.2020. Tommorow will be 03.03.2020.
Output
#my file.txt
Today is 03-02-2020. Tommorow will be 03-03-2020.

Perl script which will do the Global interpolation of number and timestamps

I'm trying to write a Perl script which will do a global interpolation of a number and timestamps (timestamp format YYYYmmDDHHMMSS, e.g. 20150124010502) with the system time. The format remains the same as in the original file, and the time is reduced by one minute in each file.
The input files would be file01.txt, file02.txt, file03.txt, file04.txt and so on. All the files have the same number, timestamps and size.
4947000219, 20150124010502 ,2
In our output file we want to replace the existing number and timestamps. The number needs to increment, and the timestamps should be replaced with the system time, formatted as in the original file.
Assuming the system time is “Mon Jan 19 13:39:57 IST 2015”, our replacement timestamp is 20150119133957, and it is reduced by one minute in each subsequent file.
The output file should look like this:
file01.txt 4947000219, 20150119133957 ,2
file02.txt 4947000220, 20150119133857 ,2
file03.txt 4947000221, 20150119133757 ,2
file04.txt 4947000222, 20150119133657 ,2
file05.txt 4947000223, 20150119133557 ,2
file06.txt 4947000224, 20150119133457 ,2
file07.txt 4947000225, 20150119133357 ,2
file08.txt 4947000226, 20150119133257 ,2
...
and so on.
Below is the Perl script we created, but it's not working.
#!/usr/bin/perl
use strict;
use File::Find;
use Time::Local;
use POSIX ();
my $n;
my @local = (localtime);
my $directory= "/home/Doc/Test";
chdir $directory;
opendir(DIR, ".") or die "couldn't open $directory: $!\n";
foreach my $file (readdir DIR){
next unless -f $file;
open my $in_fh, "<$file";
my @lines = <$in_fh>;
close $in_fh;
my $date = POSIX::strftime( '%Y%m%d%H%M%S', @local );
++$n;
$lines[0] =~ s~/(4947000219)/~$1+$n~ge;
$lines[1] =~ s~/(20140924105028)/~$date-$n~ge;
open my $out_fh, ">$file";
print $out_fh @lines;
close $out_fh;
}
closedir DIR;
Can anyone tell me, what's wrong?
You are trying to subtract an integer from a date string to reduce the number of minutes:
$lines[1] =~ s~/(20140924105028)/~$date-$n~ge;
That won't work. Instead, subtract 60 seconds from the time parameter given to localtime and use strftime again for every file.
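For example, something along these lines (a minimal sketch; $n is the per-file counter from the original script):
use POSIX ();
# format the system time minus $n minutes for the nth file
my $date = POSIX::strftime( '%Y%m%d%H%M%S', localtime( time - 60 * $n ) );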
This started out as a comment but the limited code formatting possibilities made it an answer instead.
I am not sure I understand you correctly. Is there a number of input files in the same format as the output file? Otherwise I do not understand at all where you get the timestamp and number from. I will assume your input files look just like the output and you only want to change some numbers around. If that is not true, I still believe that my second point might help you out; the example, however, will not.
If I understand your problem correctly, I would assume your problems are the s~~~ge substitutions.
First, you only replace a number in the first line and a timestamp in the second. It appears you lost a loop somewhere (the indentation is even there) and got confused about what @lines is. So first of all you need a loop over all your lines.
Second, from your example input it looks as if there are in fact no slashes. Your replacement however looks for a specific number in between slashes, removing those slashes in the process. I would assume the slashes are leftovers from a match operator you copied or something. But as your substitution operator uses ~ for separation, those slashes are literal.
Third, you look for a specific number, when the whole point of regular expressions is to be more flexible than a plain search and replace.
If I am not mistaken, you are looking for something along the lines of:
foreach my $currentLine (@lines) {
    ++$n;
    # (\S*) matches any non-spaces, (\d*) matches any number of digits
    $currentLine =~ s~(\S*) (\d*), (\d*)~"$1 " . ($2 + $n) . ", " . ($date - $n)~ge;
}

Read specific part of a filehandle in PERL

Hi, I have a large file I would like to read. To save resources, I want to read it slowly, one line at a time. However, I'm wondering if there is a way to read a specific line from a filehandle instead. For example, say I have a test.txt file containing a billion numbers starting with 1. Each number is on a separate line.
1
2
3
...
So what I currently do to get, say, line 10 is this:
open (FILE, "< test.txt") or die "$!";
@reads = <FILE>;
print $reads[9];
However, is there a way I can access a certain part of the FILE without reading everything into a big array, say when I want line 10 instead?
something like FILE->[9]
thanks for helping in advance!
Two methods: do line-by-line processing and skip to the desired line. You can use the Input Line Number variable, $., to help:
use strict;
use warnings;
use autodie;
my $line10 = sub {
open my $fh, '<', 'text.txt';
while (<$fh>) {
return $_ if $. == 10;
}
}->();
Alternatively, you could use Tie::File as you already noticed. However, while that interface is very convenient, and I'd recommend its use, it will also loop through the file behind the scenes.
use strict;
use warnings;
use autodie;
use Tie::File;
tie my @array, 'Tie::File', 'text.txt' or die "Can't open text.txt: $!";
print $array[9] // die "Line 10 does not exist";
For memory reasons, large files should be read in using a while loop, which reads the file line by line:
open my $fh, '<', 'somefile.txt';
while ( my $line = <$fh> ) {
# process the text line by line here
}
Either way, to get at that line number you are going to have to read through the file. I would recommend using the while loop and a counter to print or save the line you are looking for, as sketched below.
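A minimal sketch of that approach (the filename and the target line number are assumptions):
use strict;
use warnings;
my $target = 10;
open my $fh, '<', 'test.txt' or die "$!";
while ( my $line = <$fh> ) {
    if ( $. == $target ) {   # $. holds the current input line number
        print $line;
        last;                # stop reading once the line is found
    }
}
close $fh;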

Saving Data that's Been Run Through ActivePerl

This must be a basic question, but I can't find a satisfactory answer to it. I have a script here that is meant to convert CSV-formatted data to TSV. I've never used Perl before now, and I need to know how to save the data after the Perl script has run it through.
Script below:
#!/usr/bin/perl
use warnings;
use strict;
my $filename = 'data.csv';
open FILE, $filename or die "can't open $filename: $!";
while (<FILE>) {
s/"//g;
s/,/\t/g;
s/Begin\.Time\.\.s\./Begin Time (s)/;
s/End\.Time\.\.s\./End Time (s)/;
s/Low\.Freq\.\.Hz\./Low Freq (Hz)/;
s/High\.Freq\.\.Hz\./High Freq (Hz)/;
s/Begin\.File/Begin File/;
s/File\.Offset\.\.s\./File Offset (s)/;
s/Random.Number/Random Number/;
s/Random.Percent/Random Percent/;
print;
}
All the data that's been analyzed is in the cmd prompt. How do I save this data?
edit:
thank you everyone! It worked perfectly!
From your cmd prompt:
perl yourscript.pl > C:\result.txt
Here you run the perl script and redirect the output to a file called result.txt
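If you would rather have the script write the file itself instead of relying on shell redirection, a minimal sketch (the output filename result.tsv is an assumption):
#!/usr/bin/perl
use strict;
use warnings;
my $filename = 'data.csv';
open my $in,  '<', $filename    or die "can't open $filename: $!";
open my $out, '>', 'result.tsv' or die "can't open result.tsv: $!";
while (<$in>) {
    # ... the same s/.../.../ substitutions as in the original script ...
    print $out $_;   # write each converted line to the file instead of the screen
}
close $in;
close $out;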
It's always potentially dangerous to treat all commas in a CSV file as field separators. CSV files can also include commas embedded within the data. Here's an example.
1,"Some data","Some more data"
2,"Another record","A field with, an embedded comma"
In your code, the line s/,/\t/g treats all commas the same, and the embedded comma in the final field will also be expanded to a tab. That's probably not what you want.
Here's some code that uses Text::ParseWords to do this correctly.
#!/usr/bin/perl
use strict;
use warnings;
use Text::ParseWords;
while (<>) {
my @line = parse_line(',', 0, $_);
$_ = join "\t", @line;
# All your s/.../.../ lines here
print;
}
If you run this, you'll see that the comma in the final field doesn't get updated.

Storing time series data, without a database

I would like to store time series data, such as CPU usage over 6 months (I will poll the CPU usage every 2 minutes, so later I can derive several resolutions, such as 1 week, 1 month, or even finer resolutions like 5 minutes, etc.).
I'm using Perl, and I don't want to use RRDtool or a relational database. I was thinking of implementing my own storage using some sort of circular buffer (ring buffer) with the following properties:
6 Months = 186 Days = 4,464 Hours = 267,840 Minutes.
Dividing that into 2-minute sections: 267,840 / 2 = 133,920.
133,920 is the ring-buffer size.
Each element in the ring buffer will be a hashref whose key is the epoch time (easily converted into a date/time using localtime) and whose value is the CPU usage for that given time.
I will serialize this ring buffer (using Storable, I guess).
Any other suggestions?
Thanks,
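For reference, the serialize-with-Storable step described above might look roughly like this (a minimal sketch; the file name, the buffer layout and the get_cpu_usage_somehow() helper are assumptions):
use strict;
use warnings;
use Storable qw(store retrieve);
my $file        = 'cpu_ring.stor';
my $buffer_size = 133_920;   # 6 months of 2-minute samples
# load the existing ring buffer, or start a fresh one
my $ring = -e $file ? retrieve($file) : { pos => 0, slots => [] };
# record one sample: the key is the epoch, the value is the CPU usage
my $usage = get_cpu_usage_somehow();   # hypothetical helper
$ring->{slots}[ $ring->{pos} ] = { time() => $usage };
$ring->{pos} = ( $ring->{pos} + 1 ) % $buffer_size;
store( $ring, $file );   # serialize the whole ring back to disk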
I suspect you're overthinking this. Why not just use a flat (e.g.) TAB-delimited file with one line per time interval, with each line containing a timestamp and the CPU usage? That way, you can just append new entries to the file as they are collected.
If you want to automatically discard data older than 6 months, you can do this by using a separate file for each day (or week or month or whatever) and deleting old files. This is more efficient than reading and rewriting the entire file every time.
Writing and parsing such files is trivial in Perl. Here's some example code, off the top of my head:
Writing:
use strict;
use warnings;
use POSIX qw'strftime';
my $dir = '/path/to/log/directory';
my $now = time;
my $date = strftime '%Y-%m-%d', gmtime $now; # ISO 8601 datetime format
my $time = strftime '%H:%M:%S', gmtime $now;
my $data = get_cpu_usage_somehow();
my $filename = "$dir/cpu_usage_$date.log";
open FH, '>>', $filename
or die "Failed to open $filename for append: $!\n";
print FH "${date}T${time}\t$data\n";
close FH or die "Error writing to $filename: $!\n";
Reading:
use strict;
use warnings;
use POSIX qw'strftime';
my $dir = '/path/to/log/directory';
foreach my $filename (sort glob "$dir/cpu_usage_*.log") {
open FH, '<', $filename
or die "Failed to open $filename for reading: $!\n";
while (my $line = <FH>) {
chomp $line;
my ($timestamp, $data) = split /\t/, $line, 2;
# do something with timestamp and data (or save for later processing)
}
}
(Note: I can't test either of these example programs right now, so they might contain bugs or typos. Use at your own risk!)
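The "discard data older than 6 months" step mentioned above could then be handled by deleting old files, roughly like this (a sketch; the retention window and file-name pattern are assumptions):
use strict;
use warnings;
use POSIX qw'strftime';
my $dir    = '/path/to/log/directory';
my $cutoff = strftime '%Y-%m-%d', gmtime( time - 186 * 24 * 60 * 60 );   # ~6 months ago
foreach my $filename ( glob "$dir/cpu_usage_*.log" ) {
    # the ISO date embedded in the file name compares correctly as a plain string
    my ($date) = $filename =~ /cpu_usage_(\d{4}-\d{2}-\d{2})\.log$/ or next;
    next if $date ge $cutoff;   # still within the retention window
    unlink $filename or warn "Failed to delete $filename: $!\n";
}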
As @Borodin suggests, use SQLite or DBM::Deep, as recommended here.
If you want to stick to Perl itself, go with DBM::Deep:
A unique flat-file database module, written in pure perl. ... Can handle millions of keys and unlimited levels without significant slow-down. Written from the ground-up in pure perl -- this is NOT a wrapper around a C-based DBM. Out-of-the-box compatibility with Unix, Mac OS X and Windows.
You mention your need for storage, which could be satisfied by a simple text file as advocated by @Ilmari. (And, of course, using a CSV format would allow the file to be manipulated easily in a spreadsheet.)
But, if you plan on collecting a lot of data, and you wish to eventually be able to query it with good performance, then go with a tool designed for that purpose.
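What storing and querying the samples with DBM::Deep might look like (a minimal sketch; the database file name and the key scheme are assumptions):
use strict;
use warnings;
use DBM::Deep;
# one persistent hash, keyed by epoch time, holding the CPU usage samples
my $db = DBM::Deep->new('cpu_usage.db');
my $usage = get_cpu_usage_somehow();   # hypothetical helper
$db->{ time() } = $usage;              # written straight to disk, no separate save step
# later: pull out every sample from the last week
my $week_ago = time - 7 * 24 * 60 * 60;
for my $epoch ( sort { $a <=> $b } grep { $_ >= $week_ago } keys %$db ) {
    print "$epoch\t$db->{$epoch}\n";
}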