How can I send clear or reset with Term::Cap?

When I run tput clear | hexdump -c I get different results depending on whether I'm on kitty or xterm. How can I use Term::Cap to generate these control sequences for the respective terminal?
What I've tried is a direct copy-paste of the setup from the docs,
use strict;
use warnings;
use Term::Cap;
use POSIX;
my $termios = new POSIX::Termios;
$termios->getattr;
my $ospeed = $termios->getospeed;
my $terminal = Tgetent Term::Cap { TERM => undef, OSPEED => $ospeed };
And then I thought this should work,
$terminal->Tputs('clear', 1, *STDOUT );
But alas, it does nothing.
If I provide a non-existent name for the terminal (rather than undef, which defaults to $ENV{TERM}), I get
Can't find a valid termcap file at ./test.pl line 9.
So I know it's looking up the termcap file, and finding it.

Right way
All of termcap's capability names are two characters. For clear you'll want cl
$terminal->Tputs('cl', 1, *STDOUT );
You can find the full list in man termcap:
ch Move cursor horizontally only to column %1
cl Clear screen and cursor home
cm Cursor move to row %1 and column %2 (on screen)
Thanks to Thomas Dickey for the answer in the comments.
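Putting the setup from the question together with the cl capability, a minimal self-contained sketch looks something like this (nothing here beyond what's shown above):
#!/usr/bin/env perl
use strict;
use warnings;
use Term::Cap;
use POSIX ();

# Same setup as in the question: Tgetent wants the output speed.
my $termios = POSIX::Termios->new;
$termios->getattr;
my $ospeed = $termios->getospeed;

# TERM => undef falls back to $ENV{TERM}.
my $terminal = Term::Cap->Tgetent({ TERM => undef, OSPEED => $ospeed });

# 'cl' is the two-character termcap name for "clear screen and home cursor".
$terminal->Tputs('cl', 1, *STDOUT);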
Wrong way
Not sure what I was doing wrong; as a temporary workaround I did
use constant CLEAR => do {
open( my $fh, '-|', qw(tput clear) );
scalar <$fh>;
};
This still has to spin up another process, but it worked fine. I won't accept this answer, in the event anyone knows how to do this the right way.

Related

Set a filehandle so that prints to it are quietly skipped?

This strange interest comes from expanding requirements and no time to change design (refactor). This is not good design, sure, but I need to deal with it now and hope to refactor later.
There are a few log files opened early on which are printed to throughout code. The new requirement implies that with a (new) command-line option (--noflag) one of these log files is irrelevant.
All I could do at the moment is to pad the definition (open my $fh, ...) and all uses of it (print $fh ...) with if $flag. This is clearly bad design and it is error prone (it isn't pretty either).
Is there a way to do something with $fh when it is associated with the file
so that any following print $fh ... is accepted by the interpreter but will result in simply not running the print, without error? (Let me imagine something like, say, $fh = VOID if $flag;.) Or, is there some NULL stream or such? All I know of are STDOUT (1), STDERR (2), and STDIN (0).
I do not want $fh to print anywhere else, ideally not even to /dev/null (if that is possible?). I did look around and couldn't find anything related. I'd appreciate being pointed to information if in fact it is out there already.
Any ideas are appreciated.
PS. First question ever asked here (after years of using SO), please let me know if it's off.
UPDATE
Thanks for responses. They prompt me to add to/refine this question: Are prints marked to go to /dev/null possibly optimized, so that the 'printing' actually doesn't happen? (While I am still interested in whether it is possible to set a filehandle so to tell to Perl 'do not print here'.)
I am trying to avoid running void (print) statements, without adding conditionals.
Update/Clarification
To summarize a bit from comments (thank you!): This was not a quest for performance optimization. I completely agree with everything said in comments on this. It is simply that executing pointless statements (typically around a million) makes me uneasy. Also, I was curious about some possible dark corner of Perl that I haven't run into. (Most of this has been addressed in answers/comments.)
If you are on a Unix operating system you can use '/dev/null':
open my $fh, '>', '/dev/null' or die 'This should never happen';
/dev/null will silently accept and discard all input.
Closing your filehandle
close $fh;
will make all your prints to that file handle fail. Run
no warnings 'closed';
to suppress all the warning messages that would otherwise be generated (you do use warnings, right?)
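A minimal sketch of the close-and-silence approach (the log path here is just illustrative):
use strict;
use warnings;

open my $fh, '>', '/tmp/example.log' or die "open: $!";   # hypothetical path

# Pretend --noflag was given: shut the handle down once, up front.
close $fh;

{
    no warnings 'closed';               # lexically scoped to this block
    print $fh "this goes nowhere\n";    # returns false, writes nothing
}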
Through magic, you could create a magical handle for which operations are always successful.
perl -e'
{
package Handle::Dummy;
use Tie::Handle qw( );
use Symbol qw( gensym );
our @ISA = qw( Tie::Handle );
sub new { my $fh = gensym; tie *$fh, $_[0]; $fh }
sub TIEHANDLE { bless(\my $dummy, $_[0]) }
sub READ { return 1; }
sub WRITE { return 1; }
sub CLOSE { return 1; }
}
my $fh = Handle::Dummy->new();
print($fh "abc\n") or die $!;
close($fh) or die $!;
print("ok\n");
'
ok
That avoids the system calls, but it replaces them with expensive Perl subroutine calls.
It's far simpler and more reliable[1] to simply use /dev/null. It could very well be faster too.
Are prints marked to go to /dev/null possibly optimized
No. Perl doesn't know anything about /dev/null.
How slow do you think a system call is? This doesn't sound like the right thing to optimize!
The magical file handle is not associated with a system file handle, so it can't be passed to a C library, it won't survive exec, etc.
You can use an anonymous temporary file (about a quarter of the way down the perldoc page) like so:
#!/usr/bin/env perl
use strict;
use Getopt::Long;
my $fh;
my $need_log = 2;
print "Intitial need_log: $need_log\n";
GetOptions('flag!' => \$need_log);
print "After option processing, need_log: ", $need_log, "\n";
if ($need_log) {
open($fh, '>', "log.txt") or die "Failed to open log: $!\n";
}
else {
open($fh, '>', undef);
}
print $fh "Hello World... NOT\n";
exit 0;
Here are a few runs with different uses of the --flag option:
User@Ubuntu:~$ ls -l log.txt
ls: cannot access log.txt: No such file or directory
User@Ubuntu:~$ ./nf.pl
Initial need_log: 2
After option processing, need_log: 2
User@Ubuntu:~$ cat log.txt
Hello World... NOT
User@Ubuntu:~$ rm log.txt
User@Ubuntu:~$
User@Ubuntu:~$
User@Ubuntu:~$ ./nf.pl --flag
Initial need_log: 2
After option processing, need_log: 1
User@Ubuntu:~$ cat log.txt
Hello World... NOT
User@Ubuntu:~$ rm log.txt
User@Ubuntu:~$
User@Ubuntu:~$
User@Ubuntu:~$ ./nf.pl --noflag
Initial need_log: 2
After option processing, need_log: 0
User@Ubuntu:~$ cat log.txt
cat: log.txt: No such file or directory
User@Ubuntu:~$
I've initialized the $need_log variable to '2' so that we can tell if it has a 'True' value as a result of the flag option being present (in which case it will have the value 1) or as a result of no mention of the flag option at all (in which case it will have the value 2).
Specifying '--noflag' triggers the else clause, whose open call passes undef as the third argument and thus creates the anonymous temporary file. This doesn't perfectly match your question of not writing at all, but if the file is temporary and you're not putting gigabytes in it, this will hopefully suffice.

How do I influence the width of Perl IPC::Open3 output?

I have the following Perl code and would like it to display exactly as invoking /bin/ls in the terminal would display. For example on a terminal sized to 100 columns, it would print up to 100 characters worth of output before inserting a newline. Instead this code prints 1 file per line of output. I feel like it involves assigning some terminal settings to the IO::Pty instance, but I've tried variations of that without luck.
UPDATE: I replaced the <$READER> with a call to sysread hoping the original code might just have a buffering issue, but the output received from sysread is still one file per line.
UPDATE: I added code showing my attempt at changing the IO::Pty's size via the clone_winsize_from method. This didn't result in the output being any different.
UPDATE: As best I can tell (from reading IPC::open3 code for version 1.12) it seems you cannot pass a variable of type IO::Handle without open3 creating a pipe rather than dup'ing the filehandle. This means isatty doesn't return a true value when ls invokes it and ls then forces itself into "one file per line" mode.
I think I just need to do a fork/exec and handle the I/O redirection myself.
#!/usr/bin/env perl
use IPC::Open3;
use IO::Pty;
use strict;
my $READER = IO::Pty->new();
$READER->slave->clone_winsize_from(\*STDIN);
my $pid = open3(undef, $READER, undef, "/bin/ls");
while(my $line = <$READER>)
{
print $line;
}
waitpid($pid, 0) or die "Error waiting for pid: $!\n";
$READER->close();
I think $READER is getting overwritten with a pipe created by open3, which can be avoided by changing
my $READER = ...;
my $pid = open3(undef, $READER, undef, "/bin/ls");
to
local *READER = ...;
my $pid = open3(undef, '>&READER', undef, "/bin/ls");
See the docs.
You can pass the -C option to ls to force it to use columnar output (without getting IO::Pty involved).
The IO::Pty docs describe a clone_winsize_from(\*FH) method. You might try cloning your actual tty's dimensions.
I see that you're setting up the pty only as stdout of the child process. You might need to set it up also as its stdin — when the child process sends the "query terminal size" escape sequence to its stdout, it would need to receive the response on its stdin.
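Following the asker's own conclusion (fork/exec and wire up the pty yourself), a sketch modelled on the usage example in the IO::Pty docs might look like this; the child gets the slave as its stdin/stdout/stderr, so isatty() is true and ls should pick columnar output. This is an outline under those assumptions, not the accepted fix:
#!/usr/bin/env perl
use strict;
use warnings;
use IO::Pty;

my $pty = IO::Pty->new;
$pty->slave->clone_winsize_from(\*STDIN);   # child sees our terminal size

my $pid = fork;
die "fork: $!" unless defined $pid;

if ($pid == 0) {
    # Child: make the slave the controlling terminal and wire it to stdio,
    # then exec ls, which now sees a tty and formats output in columns.
    $pty->make_slave_controlling_terminal;
    my $slave = $pty->slave;
    close $pty;
    open STDIN,  '<&', $slave or die "dup stdin: $!";
    open STDOUT, '>&', $slave or die "dup stdout: $!";
    open STDERR, '>&', $slave or die "dup stderr: $!";
    close $slave;
    exec '/bin/ls' or die "exec: $!";
}

# Parent: read the child's output from the master side of the pty.
# Once the child exits, the read fails (EOF/EIO) and the loop ends.
$pty->close_slave;
while (my $line = <$pty>) {
    print $line;
}
waitpid $pid, 0;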

terminal: where am I?

Is there a variable or a function which can tell me the current position of the cursor?
#!/usr/bin/env perl
use warnings;
use 5.012;
use Term::ReadKey;
use Term::Cap;
use POSIX;
my( $col, $row ) = GetTerminalSize();
my $termios = new POSIX::Termios;
$termios->getattr;
my $ospeed = $termios->getospeed;
my $terminal = Tgetent Term::Cap { TERM => undef, OSPEED => $ospeed };
# some movement ...
# at which position (x/y) is the cursor now?
You could use curses instead. It has getcurx() and getcury(). There is a CPAN module for it (and the libcurses-perl package in Debian or Ubuntu).
I don't think you can determine the cursor position using termcap.
The termutils manual says:
If you plan to use the relative cursor motion commands in an application program, you must know what the starting cursor position is. To do this, you must keep track of the cursor position and update the records each time anything is output to the terminal, including graphic characters.
Some terminals may support querying the position via CSI 6 n. If supported, the position will be reported as CSI Pl;Pc R. For example:
$ echo -e "\e[6n"; xxd
^[[4;1R
0000000: 1b5b 343b 3152 0a .[4;1R.
This reports the cursor as being at the 1st column of the 4th line (counting from 1).
However, this probably ought not be relied upon, as not very many terminals actually support this.
Printing ESC[6n on ANSI-compatible terminals will give you the current cursor position as ESC[n;mR, where n is the row and m is the column.
So try reading it with a terminal escape sequence. Something like this:
perl -e '$/ = "R";' -e 'print "\033[6n";my $x=<STDIN>;my($n, $m)=$x=~m/(\d+)\;(\d+)/;print "Current position: $m, $n\n";'
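The same idea as a small script, using Term::ReadKey's raw mode so the reply is neither echoed nor line-buffered (no Enter needed). This is a sketch; terminals that don't implement the report simply time out:
#!/usr/bin/env perl
use strict;
use warnings;
use Term::ReadKey;

$| = 1;
ReadMode('raw');            # no echo, no line buffering
print "\e[6n";              # ask the terminal for the cursor position

my $reply = '';
while (defined(my $c = ReadKey(0.5))) {   # 0.5s timeout per character
    $reply .= $c;
    last if $c eq 'R';      # reply looks like ESC [ row ; col R
}
ReadMode('restore');

if (my ($row, $col) = $reply =~ /\[(\d+);(\d+)R/) {
    print "row $row, column $col\n";
}
else {
    print "no cursor position report from this terminal\n";
}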

Escape whitespace when using backticks

I've had a search around, and from my perspective using backticks is the only way I can solve this problem. I'm trying to call the mdls command from Perl for each file in a directory to find its last accessed time. The issue I'm having is that the file names I get from find contain unescaped spaces, which bash obviously doesn't like. Is there an easy way to escape all of the whitespace in my file names before passing them to mdls? Please forgive me if this is an obvious question. I'm quite new to Perl.
use File::Find;
my $top_dir = '/Volumes/hydrogen/FLAC';
sub wanted { # Learn about sub routines
if ($File::Find::name) {
my $curr_file_path = $File::Find::name. "\n";
`mdls $curr_file_path`;
print $_;
}
}
find(\&wanted, $top_dir);
If you are JUST wanting "last access time" in terms of the OS last access time, mdls is the wrong tool. Use perl's stat. If you want last access time in terms of the Mac registered application (i.e., a song played by QuickTime or iTunes) then mdls is potentially the right tool. (You could also use osascript to query the Mac app directly...)
Backticks are for capturing the text return. Since you are using mdls, I assume capturing and parsing the text is still to come.
So there are several methods:
Use the list form of system, and the quoting is not necessary (if you don't care about the return text);
Use String::ShellQuote to escape the file name before sending it to sh;
Build the string and enclose it in single quotes prior to sending it to the shell. This is harder than it sounds, because file names with single quotes defeat your quotes! For example, sam's song.mp4 is a legal file name, but if you surround it with single quotes you get 'sam's song.mp4', which is not what you meant...
Use open to open a pipe to the output of the child process, like this: open my $fh, '-|', "mdls", "$curr_file" or die "$!";
Example of String::ShellQuote:
use strict; use warnings;
use String::ShellQuote;
use File::Find;
my $top_dir = '/Users/andrew/music/iTunes/iTunes Music/Music';
sub wanted {
if ($File::Find::name) {
my $curr_file = "$File::Find::name";
my $rtr;
return if -d;
my $exec="mdls ".shell_quote($curr_file);
$rtr=`$exec`;
print "$rtr\n\n";
}
}
find(\&wanted, $top_dir);
Example of pipe:
use strict; use warnings;
use String::ShellQuote;
use File::Find;
my $top_dir = '/Users/andrew/music/iTunes/iTunes Music/Music';
sub wanted {
if ($File::Find::name) {
my $curr_file = "$File::Find::name";
my $rtr;
return if -d;
open my $fh, '-|', "mdls", "$curr_file" or die "$!";
{ local $/; $rtr=<$fh>; }
close $fh or die "$!";
print "$rtr\n\n";
}
}
find(\&wanted, $top_dir);
If you're sure the filenames don't contain newlines (either CR or LF), then pretty much all Unix shells accept backslash quoting, and Perl has the quotemeta function to apply it.
my $curr_file_path = quotemeta($File::Find::name);
my $time = `mdls $curr_file_path`;
Unfortunately, that doesn't work for filenames with newlines, because the shell handles a backslash followed by a newline by deleting both characters instead of just the backslash. So to be really safe, use String::ShellQuote:
use String::ShellQuote;
...
my $curr_file_path = shell_quote($File::Find::name);
my $time = `mdls $curr_file_path`;
That should work on filenames containing anything except a NUL character, which you really shouldn't be using in filenames.
Both of these solutions are for Unix-style shells only. If you're on Windows, proper shell quoting is much trickier.
If you just want to find the last access time, is there some weird Mac reason you aren't using stat? When would it be worse than kMDItemLastUsedDate?
my $last_access = ( stat($file) )[8];
It seems kMDItemLastUsedDate isn't always updated to the last access time. If you work with a file through the terminal (e.g. cat, more), kMDItemLastUsedDate doesn't change but the value that comes back from stat is right. touch appears to do the right thing in both cases.
It looks like you need stat for the real answer, but mdls if you're looking for access through applications.
You can bypass the shell by expressing the command as a list, combined with capture() from IPC::System::Simple:
use IPC::System::Simple qw(capture);
my $output = capture('mdls', $curr_file_path);
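For example, a sketch reusing the directory and wanted() structure from the question, with no shell involved at all:
use strict;
use warnings;
use File::Find;
use IPC::System::Simple qw(capture);

my $top_dir = '/Volumes/hydrogen/FLAC';

sub wanted {
    return if -d;    # skip directories
    # List form: no shell, so spaces and quotes in names are harmless.
    my $output = capture('mdls', $File::Find::name);
    print $output;
}

find(\&wanted, $top_dir);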
Quote the variable name inside the backticks:
`mdls "$curr_file_path"`;
`mdls '$curr_file_path'`;

getting started with vim scripting with perl

I'd like to create a vim function/command to insert an XSD style timestamp. Currently, I use the following in my vimrc file:
nmap <F5> a<C-R>=strftime("%Y-%m-%dT%H:%M:%S-07:00")<CR><Esc>
I'd like to use the Perl code:
use DateTime;
use DateTime::Format::XSD;
print DateTime->now(formatter => 'DateTime::Format::XSD', time_zone => 'America/Phoenix');
But I don't know where to start. I'm aware that I can define a function that uses Perl. Example:
function PerlTest()
perl << EOF
use DateTime;
use DateTime::Format::XSD;
print DateTime->now(formatter => 'DateTime::Format::XSD', time_zone => 'America/Phoenix');
EOF
endfunction
But when I changed my vimrc to the following, I didn't get what I expected:
nmap <F5> a<C-R>=PerlTest()<CR><Esc>
Could someone point me in the right direction for implementing this? This is the first time I've tried to write functions in vim. Also, I'm using vim 7.2 compiled with perl support.
First off, you'll want to take a look at :help if_perl for the general information about using Perl from within Vim.
For this specific question, I don't think the same approach of entering insert mode and evaluating an expression is the best option. It doesn't look like the language bindings have a way to return a value like that.
What you can do instead is to have the function get the current line, put the time string at the appropriate place, and set the current line again.
fun! PerlTest()
perl << EOF
use DateTime;
use DateTime::Format::XSD;
my ($row, $col) = $curwin->Cursor();
my ($line) = $curbuf->Get($row);
substr($line, $col + 1, 0,
DateTime->now(formatter => 'DateTime::Format::XSD',
time_zone => 'America/Phoenix'));
$curbuf->Set($row, $line);
EOF
endfun
Then your map would simply be nnoremap <F5> :call PerlTest()<CR>.
One issue I've noticed with the above is that it doesn't work well if the line contains characters where 1 byte != 1 column (i.e., tabs, multi-byte characters, etc.). I've played with various ways of trying to fix that, but none of them seem to work very well.
The problem is that there's no easy way to map from Vim's cursor position to a position in the string that represents the cursor's current line.
A different approach, which avoids this problem, is to just use the Perl interface to get the data and then paste the data from Vim.
fun! PerlTest()
let a_reg = getreg('a', '1')
let a_reg_type = getregtype('a')
perl << EOF
use DateTime;
use DateTime::Format::XSD;
my $date = DateTime->now(formatter => 'DateTime::Format::XSD',
time_zone => 'America/Phoenix');
VIM::Eval("setreg('a', '$date', 'c')");
EOF
normal "ap
call setreg('a', a_reg, a_reg_type)
endfun
nnoremap <F5> :call PerlTest()<CR>
If you're happy with running an external Perl script you can try this:
:map <F5> :let @a = system("perl script.pl")<cr>"ap
This runs the command perl script.pl (adjust depending on paths), captures its output in register @a, and pastes it at the cursor position.
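Here script.pl would simply be the snippet from the question, for example:
# script.pl -- print the XSD timestamp with no trailing newline,
# so the pasted text stays inline at the cursor.
use strict;
use warnings;
use DateTime;
use DateTime::Format::XSD;
print DateTime->now(
    formatter => 'DateTime::Format::XSD',
    time_zone => 'America/Phoenix',
);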