How to find the age of a file in SFTP using Perl?

I am connecting to SFTP and downloading a file using Perl. I want to download only files that were created or modified within the last hour.
Below is my code snippet:
use strict;
use Net::SFTP::Foreign;
my $sftp_conn=Net::SFTP::Foreign->new('test.sftp.com',user=>'test',password=>'test123');
my $a1 = Net::SFTP::Foreign::Attributes->new();
my $a2 = $sftp_conn->stat('/inbox/tested.txt') or die "remote stat command failed: ".$sftp_conn->status;
$sftp_conn->get("/inbox/tested.txt", "/tmp");
Here I want to check when the file was last modified and calculate its age in hours.

You are on the right track. Calling ->stat on the connection object returns a Net::SFTP::Foreign::Attributes object. You can then call ->mtime on it to get the modification time.
my $attr = $sftp_conn->stat('/inbox/tested.txt')
or die "remote stat command failed: ".$sftp_conn->status;
print $attr->mtime;
There is no need to create an empty object first, so you don't need the following line. You probably copied it from the SYNOPSIS in the docs, but that only shows different ways of using the module; you can delete it.
my $a1 = Net::SFTP::Foreign::Attributes->new();
I don't know which format the mtime will be in, so I can't tell you how to do the comparison. There is nothing about that in the docs, in the code of the module or in the tests.
A quick Google search suggested "YYYYMMDDhhmmss", but that might not be the right one. Just try it. If it's a Unix timestamp, you can just compare it to time or time - 3600, but if it's a string, you will need to parse it. Time::Piece is a useful module, shipped with core Perl, for doing that.
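If mtime does turn out to be a Unix epoch timestamp (which is what the SFTP protocol itself uses for file attributes), the age check might look like the sketch below; the host, credentials, and paths are placeholders taken from the question:

```perl
use strict;
use warnings;
use Net::SFTP::Foreign;

my $sftp = Net::SFTP::Foreign->new('test.sftp.com', user => 'test', password => 'test123');
my $attr = $sftp->stat('/inbox/tested.txt')
    or die "remote stat command failed: " . $sftp->status;

my $age_seconds = time - $attr->mtime;    # seconds since last modification
my $age_hours   = $age_seconds / 3600;    # convert to hours

# Download only if the file was modified within the last hour
$sftp->get('/inbox/tested.txt', '/tmp/tested.txt') if $age_hours <= 1;
```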

Related

create a new variablename in properties file at each run of perlScript

The variable inside the properties file is $starttime, and its value is the current date in YYYYMMDDHH24MI format. After I run the script a second time, a new variable $starttime_2 should be added with the current date value.
My code is:
#!/usr/local/bin/perl
use Time::Piece;
$starttime = localtime->strftime('%Y%m%d%H%M');
$i = 0;
open my $file, '>', 'order.properties' or die $!;
print $file "Start_time", $i, " = ", $starttime;
close $file;
For each run, the order.properties file should be updated like this:
at the first run:
Start_time_1 = 2018121317:04 (the current system time)
at the second run:
Start_time_2 = 2018121317:05
On the 3rd, 4th, and 5th runs, the variable name should change and the current date and time should be assigned.
The output after the 3rd run will look like:
Start_time_1 = 2018121317:04
Start_time_2 = 2018121317:05
Start_time_3 = 2018121317:09
The number of start-time entries in the properties file should equal the number of times the script has been executed.
I'm not going to give you a complete answer as you'll learn more by working it out for yourself. But I will point out the two things you'll need to fix.
You open your file using >, which overwrites the file each time you run your program. You need to, instead, use "append" mode, which adds new data to the end of your file. You do that by using >> instead of >.
You also need to work out which number gets appended to Start_time. Obviously, your program closes down each time it finishes, so you can't store it as a variable. I would suggest that the easiest approach is probably to count the lines that are currently in the file before writing your new lines.
Two more pieces of advice. The Perl FAQ is a great source of Perl programming advice and you should always have use strict and use warnings in your Perl programs.
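If you do get stuck, a minimal sketch of the two fixes described above (append mode, and deriving the counter from the file's current line count) might look like this:

```perl
use strict;
use warnings;
use Time::Piece;

my $starttime = localtime->strftime('%Y%m%d%H%M');

# Count the lines already in the file to work out the next number
my $count = 0;
if (open my $in, '<', 'order.properties') {
    $count++ while <$in>;
    close $in;
}

# '>>' appends to the file instead of overwriting it
open my $out, '>>', 'order.properties' or die $!;
print $out "Start_time_", $count + 1, " = ", $starttime, "\n";
close $out;
```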

perl script to read an excel file using cpan modules

I am writing a Perl script to read data from an Excel file. The script is written in a Unix environment and runs on the server, whereas the Excel file is on my Desktop in Windows.
#!/usr/bin/perl
use strict;
use warnings;
use feature 'say';
use Spreadsheet::Read;
my $workbook = ReadData ("C:/Users/tej/Desktop/Work.xlsx");
say $workbook->[1]{A1};
The output gives out a warning saying
Use of uninitialized value in say at..... line 10
And no other output is printed. I just wrote sample code to read the A1 cell value from sheet 1. Later, I need to write logic to read particular values. For right now, I need to fix the error so I can read and print the Excel cell values. Appreciate any help. :)
I fixed the issue: the file was not being accessed, so I used Samba to map the Unix disk to a Windows network drive. But now I get a different error, which says: Parser for XLSX is not installed at.. Can someone help me resolve it?
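In case it helps: Spreadsheet::Read delegates .xlsx parsing to a backend module, and that error usually means the backend is missing. Installing Spreadsheet::ParseXLSX from CPAN normally resolves it:

```shell
# Spreadsheet::Read needs a backend parser for .xlsx files;
# Spreadsheet::ParseXLSX is the usual choice.
cpan Spreadsheet::ParseXLSX
# or, with cpanminus:
cpanm Spreadsheet::ParseXLSX
```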

Change output filename from WGET when using input file option

I have a perl script that I wrote that gets some image URLs, puts the urls into an input file, and proceeds to run wget with the --input-file option. This works perfectly... or at least it did as long as the image filenames were unique.
I have a new company sending me data and they use a very TROUBLESOME naming scheme. All files have the same name, 0.jpg, in different folders.
for example:
cdn.blah.com/folder/folder/202793000/202793123/0.jpg
cdn.blah.com/folder/folder/198478000/198478725/0.jpg
cdn.blah.com/folder/folder/198594000/198594080/0.jpg
When I run my script with this, wget works fine and downloads all the images, but they are titled 0.jpg.1, 0.jpg.2, 0.jpg.3, etc. I can't just count them and rename them because files can be broken, not available, whatever.
I tried running wget once for each file with -O, but it's embarrassingly slow: starting the program, connecting to the site, downloading, and ending the program. Thousands of times. It's an hour vs minutes.
So, I'm trying to find a method to change the output filenames from wget without it taking so long. The original approach works so well that I don't want to change it too much unless necessary, but I am open to suggestions.
Additional:
LWP::Simple is too simple for this. Yes, it works, but very slowly. It has the same problem as running individual wget commands: each get() or getstore() call makes the system reconnect to the server. Since the files are so small (60kB on average) and there are so many to process (1851 for this one test file alone), the connection time is considerable.
The filename I will be using can be found with /\/(\d+)\/(\d+\.jpg)/i, where the filename will simply be $1$2, giving 2027931230.jpg. Not really important for this question.
I'm now looking at LWP::UserAgent with LWP::ConnCache, but it times out and/or hangs on my PC. I will need to adjust the timeout and retry values. The inaugural run of the code downloaded 693 images (43MB) in just a couple of minutes before it hung. Using LWP::Simple, I only got 200 images in 5 minutes.
use LWP::UserAgent;
use LWP::ConnCache;

chomp(@filelist = <INPUTFILE>);
my $browser = LWP::UserAgent->new;
$browser->conn_cache(LWP::ConnCache->new());
foreach (@filelist) {
    /\/(\d+)\/(\d+\.jpg)/i or next;
    my $newfilename = $1 . $2;
    $response = $browser->mirror($_, $folder . $newfilename);
    die 'response failure' if $response->is_error();
}
LWP::Simple's getstore function allows you to specify a URL to fetch from and the filename to store the data from it in. It's an excellent module for many of the same use cases as wget, but with the benefit of being a Perl module (i.e. no need to outsource to the shell or spawn off child processes).
use LWP::Simple;
# Grab the filename from the end of the URL
my $filename = (split '/', $url)[-1];
# If the file exists, increment its name
while (-e $filename)
{
$filename =~ s{ (\d+)[.]jpg }{ $1+1 . '.jpg' }ex
or die "Unexpected filename encountered";
}
getstore($url, $filename);
The question doesn't specify exactly what kind of renaming scheme you need, but this will work for the examples given by simply incrementing the filename until the current directory doesn't contain that filename.
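Putting the pieces together, a keep-alive variant that also applies the renaming scheme from the question might look like the sketch below. The input-file name and target folder are placeholders; keep_alive => 10 tells LWP::UserAgent to cache up to 10 persistent connections, which avoids the per-file reconnect cost:

```perl
use strict;
use warnings;
use LWP::UserAgent;

my $folder = 'images/';                          # hypothetical target folder
my $ua = LWP::UserAgent->new(keep_alive => 10);  # reuse connections across requests

open my $in, '<', 'urls.txt' or die $!;          # hypothetical input file, one URL per line
while (my $url = <$in>) {
    chomp $url;
    # Build a unique local name from the last directory plus the basename
    next unless $url =~ /\/(\d+)\/(\d+\.jpg)/i;
    my $response = $ua->mirror($url, $folder . $1 . $2);
    warn "failed: $url\n" if $response->is_error;
}
close $in;
```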

How to run a local program with user input in Perl

I'm trying to get user input from a web page written in Perl and send it to a local program (blastp), then display the results.
This is what I have right now:
(input code)
print $q->p, "Your database: $bd",
$q->p, "Your protein is: $prot",
$q->p, "Executing...";
print $q->p, system("blastp","-db $bd","-query $prot","-out results.out");
Now, I've done a little research, but I can't quite grasp how you're supposed to do things like this in Perl. I've tried opening a file, writing to it, and sending it over to blastp as an input, but I was unsuccessful.
For reference, this line produces a successful output file:
kold#sazabi ~/BLAST/pataa $ blastp -db pataa -query ../teste.fs -out results.out
I may need to force the bd to load from an absolute path, but that shouldn't be difficult.
edit: Yeah, the DBs are located via an environment variable; that's fixed. OK, all I need is to get the input into a file, pass it to the command, and then print the output file to the CGI page.
edit2: for clarification:
I am receiving user input in $prot, I want to pass it over to blastp in -query, have the program blastp execute, and then print out to the user the results.out file (or just have a link to it, since blastp can output in HTML)
EDIT:
All right, fixed everything I needed to fix. The big problem was me not seeing what was going wrong: I had to install Capture::Tiny and print out stderr, which was when I realized the environment variable wasn't getting set correctly, so BLAST wasn't finding my databases. Thanks for all the help!
Step 1: Write $prot to a file. Assuming you need to write it as-is, without splitting or otherwise processing the text:
For a fixed file name (may be problematic):
use File::Slurp;
write_file("../teste.fs", $prot, "\n") or print_error_to_web();
# Implement the latter to print error in nice HTML format
For a temp file (better):
use File::Temp qw(tempfile);
my ($fh, $filename) = tempfile( $template, DIR => "..", CLEANUP => 1 );
# You can also create a temp directory, which is even better, via tempdir()
print $fh "$prot\n";
close $fh;
Step 2: Run your command as you indicated:
my $rc = system("$BLASTP_PATH/blastp", "-db", "pataa"
,"-query", "../teste.fs", "-out", "results.out");
# Process $rc for errors
# Use qx[] instead of system() if you want to capture
# standard output of the command
Step 3: Read the output file in:
use File::Slurp;
my $out_file_text = read_file("results.out");
Step 4: Send it back to the web client:
print $q->p, $out_file_text;
The above code has multiple issues (e.g. you need better file/directory paths, more error handling etc...) but should start you on the right track.
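Assembled into one piece, the steps above might look like this sketch. It assumes $q (the CGI object), $prot, and $BLASTP_PATH already exist earlier in the script, and the database name and paths are the placeholders from the question:

```perl
# Assumes $q (CGI object), $prot, and $BLASTP_PATH are defined above
use File::Temp qw(tempfile);
use File::Slurp;

# Step 1: write the user's query sequence to a temp file
my ($fh, $query_file) = tempfile(DIR => '..', SUFFIX => '.fs', UNLINK => 1);
print $fh "$prot\n";
close $fh;

# Step 2: run blastp with each argument passed separately
my $rc = system("$BLASTP_PATH/blastp", '-db', 'pataa',
                '-query', $query_file, '-out', 'results.out');
die "blastp failed: $?" if $rc != 0;

# Steps 3 and 4: read the result file and send it to the client
print $q->p, read_file('results.out');
```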

How can I download via FTP all files with the current date in their name?

I have a file format that looks like "IDY03101.200901110500.axf". I have about 25 similar files residing in an FTP repository and want to download only those files for the current date. I have used the following code, which is not working; I believe the regular expression is incorrect.
my @ymb_date = "IDY.*\.$year$mon$mday????\.axf";
foreach my $file ( @ymb_date )
{
    print STDOUT "Getting file: $file\n";
    $ftp->get($file) or warn $ftp->message;
}
Any help appreciated.
EDIT:
I need all of the current day's files.
Using ls -lt | grep "Jan 13" works on the UNIX box but not in the script.
What could be a valid regex in this scenario?
It doesn't look like you're using any regular expression. You're trying to use the literal pattern as the filename to download.
Perhaps you want to use the ls method of Net::FTP to get the list of files then filter them.
foreach my $file ( $ftp->ls ) {
    next unless $file =~ m/$regex/;
    $ftp->get($file);
}
You might also like the answers that talk about implementing mget for "Net::FTP" at Perlmonks.
Also, I think you want the regex that finds four digits after the date. In Perl, you could write that as \d{4}. The ? is a quantifier in Perl, so four of them in a row don't work.
IDY.*\.$year$mon$mday\d{4}\.axf
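Combining the corrected regex with the ls loop gives a sketch like the one below. The host and credentials are placeholders, and the date parts are built zero-padded with sprintf, which the pattern assumes:

```perl
use strict;
use warnings;
use Net::FTP;

# Build today's date parts, zero-padded
my ($mday, $mon, $year) = (localtime)[3, 4, 5];
$year += 1900;
$mon  = sprintf '%02d', $mon + 1;
$mday = sprintf '%02d', $mday;

# \d{4} matches the four digits (hhmm) that follow the date
my $regex = qr/^IDY.*\.$year$mon$mday\d{4}\.axf$/;

# Host and credentials are placeholders
my $ftp = Net::FTP->new('ftp.example.com') or die "Cannot connect: $@";
$ftp->login('user', 'password') or die "Login failed: ", $ftp->message;

foreach my $file ( $ftp->ls ) {
    next unless $file =~ $regex;
    print "Getting file: $file\n";
    $ftp->get($file) or warn $ftp->message;
}
```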
I do not think regexes work like that. Though it has been a while since I did Perl, so I could be way off there. :)
Does your $ftp object have access to an mget() method? If so, maybe try this?
$ftp->mget("IDY*.$year$mon$mday*.axf") or warn $ftp->message;