OTRS PostMasterMailBox.pl not a scalar reference - perl

I have just upgraded OTRS from 3 to 3.3. Before the upgrade we retrieved emails from a mailbox without trouble, but it's not working now. The credentials have been verified multiple times, and as far as I can tell it connects just fine. We have not changed anything in the files, so we aren't quite sure what is up.
The error is:
"Not a SCALAR reference at /usr/share/perl15/Mail/IMAPClient.pm line 419"
This is the snippet of IMapClient.pm:419:
# give caller control of args to start_SSL if desired
my @sslargs =
  ( $self->Starttls and ref( $self->Starttls ) eq "ARRAY" )
  ? ( @{ $self->Starttls } )
  : ( Timeout => 30 );

unless ( $ioclass->start_SSL( $sock, @sslargs ) ) {
    $self->LastError( "Unable to start TLS: " . $ioclass->errstr );
    return undef;
}
Edit: We had to do a quick downgrade, and that fixed the problem entirely, so I wonder if it's a bug in OTRS?

Have you checked the installed Perl Modules using otrs.CheckModules.pl?
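If the modules are all present, it may also be worth confirming which versions of the SSL stack Mail::IMAPClient is using, since the failing line is its start_SSL path. A quick, hedged check from the command line:
perl -MIO::Socket::SSL -le 'print IO::Socket::SSL->VERSION'
perl -MMail::IMAPClient -le 'print Mail::IMAPClient->VERSION'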

Related

"sh: 1: file: not found" thrown in Perl

So this is an issue I see mentioned on several coding help sites, always with a slight variation. I'm not entirely familiar with what it means, and what's even more curious is that this error is thrown midway through a larger Upload.pm script and does not cause any sort of fatal error. It gets tossed into my error log somewhere during this unless conditional:
# If this is the first slice, validate the file extension and mime-type.
# Mime-type of following slices should be "application/octet-stream".
unless ( defined $response{'error'} ) {
    if ( $slice->{'index'} == 1 ) {
        my ($filename, $directory, $extension) = fileparse($path . $parent_file, qr/\.[^.]*/);
        unless ( is_valid_filetype($slice->{'tmp_file'}, $extension) ) {
            $response{'error'} = "Invalid file type.";
            $response{'retry'} = 0;
        }
    }
}
Now, let me be perfectly honest. I don't really understand the error message, and I could really use some help understanding it, as well as solving it.
Our Perl-based web app has refused to let us upload files correctly since upgrading to Debian Bullseye, and I've been stuck debugging this code I didn't write for a few days now. I'm wondering if the upgrade deprecated some Perl modules, or if the paths to those modules are no longer correct.
I'm testing this in an Ubuntu-based Docker environment running Debian Bullseye on an Apache 2 server.
If you need any more context, clarification, etc, please let me know.
is_valid_filetype() looks like this:
sub is_valid_filetype
{
    my ($tmp_file, $extension) = @_;

    if ( $tmp_file && $extension ) {
        # Get temp file's actual mime-type.
        my $mime = qx/file --mime-type -b '${tmp_file}'/;
        $mime =~ s/^\s+|\s+$//g;

        # Get valid mime-types matching this extension.
        my $dbh = JobTracker::Common::dbh or die("DBH not available.");
        my $mime_types = $dbh->selectrow_array('SELECT `mime_types` FROM `valid_files` WHERE `extension` = ?', undef, substr($extension, 1));

        if ( $mime && $mime_types ) {
            if ( $mime_types !~ /,/ ) {
                # Single valid mime-type for this extension.
                if ( $mime eq $mime_types ) {
                    return 1;
                }
            } else {
                # Multiple valid mime-types for this extension.
                my %valid_mimes = map { $_ => 1 } split(/,/, $mime_types);
                if ( defined $valid_mimes{$mime} ) {
                    return 1;
                }
            }
        }
    }
    return 0;
}
It's a message from sh (not Perl). It concerns an error on line 1 of the shell command it was given, which was an attempt to run the file utility, and sh couldn't find it.
The code in question executes this command using
qx/file --mime-type -b '${tmp_file}'/
Install file or adjust the PATH so it can be found.
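If you want the script to detect the missing utility itself rather than rely on the shell's error, a minimal sketch (assuming the CPAN module File::Which is available) might look like:
use File::Which qw( which );

# Refuse to shell out if the `file` utility cannot be found on PATH.
die "The 'file' utility is not installed or not on PATH\n"
    unless which('file');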
Note that this code suffers from a code injection bug. It will fail if the string in $tmp_file contains a single quote ('), possibly resulting in the unintentional execution of code.
Fixed:
use String::ShellQuote qw( shell_quote );

my $cmd  = shell_quote( "file", "--mime-type", "-b", $tmp_file );
my $mime = qx/$cmd/;
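Another option, just as a sketch, is to skip the shell entirely with a list-form pipe open, which makes quoting unnecessary:
# Run `file` directly, without a shell, so no quoting of $tmp_file is needed.
open my $fh, '-|', 'file', '--mime-type', '-b', $tmp_file
    or die "Cannot run file: $!";
my $mime = <$fh>;    # still needs the whitespace trim from the original sub
close $fh;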
Debian Bullseye was reading our CSV files as the wrong mime-type: the file command was reporting them as application/csv, despite the files obviously not being an application.
This may be an actual bug in Bullseye, because both my boss and I have scoured the internet with no luck finding anyone else with this issue. I may even report it to Bullseye's devs for further awareness.
The fix was manually adding our own mime-types so that these files were interpreted correctly.
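To make that concrete, here is a hedged guess at what such a fix can look like against the valid_files table from the snippet above (the exact extension and mime-type values here are assumptions, not the poster's actual data):
# Accept both the traditional and the Bullseye-reported mime-type for .csv uploads.
$dbh->do(
    'UPDATE `valid_files` SET `mime_types` = ? WHERE `extension` = ?',
    undef,
    'text/csv,application/csv',
    'csv',
);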
It took us dumping the tmp directory to confirm the files existed, and triple-checking that I had my modules installed.
This was such a weird upstream issue that neither of us could have imagined it would be file-type interpretation at the OS level in Bullseye.
I really hope this helps someone, saves them the time it took us to find this.

Image request failing. Possible HTTPS issue

I'm working with some Perl code that I don't understand:
my $tmpdir = "XXX";
my $src = "tmp" . time . int rand 10 . ".jpg";
while ( -s "$tmpdir/$src" ) {
$src = "tmp" . time . int rand 10 . ".jpg";
}
my $ua = LWP::UserAgent->new( keep_alive => 0, timeout => 10 );
my $req = HTTP::Request->new(
"GET",
"https://www.com/act/couponimage/1877",
);
my $res = $ua->request( $req, "$tmpdir/$src" );
if ( ! $res || ! -s "$tmpdir/$src" ) {
header( 301, 0, 0, "https://www.com/" );
exit 0;
}
For some reason it hits the point where it redirects to the main page (the header code).
There is clearly something wrong here because it never executes past the last if clause which redirects. Maybe it has to do with using HTTPS?
It is not a filesystem issue. The following coupon image path works fine
Xxx.com/img/coupon-2600.jpg
The request itself should be working fine, but as pointed out, you really should check which versions of the packages you are using.
For me (perl 5.22, LWP::UserAgent 6.26) the call $ua->request( $req ) needs to be $ua->request( $req->uri ), and the file to store the response in must be given with $ua->request( $req->uri, ':content_file' => $fn ).
I think the original author assumed the content file would be created by giving the filename to $ua->request. With :content_file, that is what's happening for me. He put the while ( -s $src ) in place to ensure he got a filename that was not yet taken.
Thus, I believe the solution to your problem is to change
my $res = $ua->request( $req, "$tmpdir/$src" );
to
my $res = $ua->request( $req->uri, ':content_file' => "$tmpdir/$src" );
The docs mention that using :content_file means the actual response content is no longer stored in the object, so you may have to adapt your code.
If all you need is the file, I would adapt the final if to check for the file that should now exist ( which should, I believe, only happen upon successful response ).
If what you need is the $res object to use later, I would probably opt to check the response worked, then print the content to the file you want it in, so just do
my $res = $ua->request( $req->uri );
adapt the if accordingly and print content to file.
I would personally prefer the latter, as I can check the response for success, then print: this seems much clearer.
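For the record, a minimal sketch of that second approach, assuming the same $tmpdir/$src naming and the header() helper from the question:
my $ua  = LWP::UserAgent->new( keep_alive => 0, timeout => 10 );
my $res = $ua->request( HTTP::Request->new( "GET", "https://www.com/act/couponimage/1877" ) );

if ( $res->is_success ) {
    # Write the image bytes ourselves instead of relying on :content_file.
    open my $fh, '>:raw', "$tmpdir/$src" or die "Cannot open $tmpdir/$src: $!";
    print {$fh} $res->decoded_content( charset => 'none' );
    close $fh;
}
else {
    header( 301, 0, 0, "https://www.com/" );
    exit 0;
}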

Sybase Warning messages from perl DBI

I am connecting to Sybase 12 from a Perl script and calling stored procs, and I get the following warnings:
DBD::Sybase::db prepare failed: Server message number=2401 severity=11 state=2 line=0 server=SERVER_NAME text=Character set conversion is not available between client character set 'utf8' and server character set 'iso_1'.
Server message number=2411 severity=10 state=1 line=0 server=SERVER_NAME text=No conversions will be done.
at line 210.
Now, I understand these are only warnings, and my process works perfectly fine, but I am calling my stored proc in a loop throughout the day, which creates a lot of warning messages in my log files and causes the entire process to run a bit slower than expected. Can someone help me suppress these, please?
You can use a callback to handle the messages you want ignored. See the DBD::Sybase docs. The example below is derived from the docs; you specify the message numbers you would like to ignore.
my %blocked_msgs = map { $_ => 1 } ( 2401, 2411 );

sub err_handler {
    my ($err, $sev, $state, $line, $server, $proc, $msg, $sql, $err_type) = @_;
    if ( exists $blocked_msgs{$err} ) {    # it's a blocked message
        return 0;                          # This is not an error
    }
    return 1;
}
This is how you might use it:
$dbh = DBI->connect('dbi:Sybase:server=troll', 'sa', '');
$dbh->{syb_err_handler} = \&err_handler;
$dbh->do("exec someproc");
$dbh->disconnect;

Examples from Net::RabbitMQ not working

I'm trying to learn RabbitMQ for a project I'm working on. My research showed two libraries to use, Net::RabbitMQ and AnyEvent::RabbitMQ. AnyEvent::RabbitMQ seems overly baroque for my needs but Net::RabbitMQ does not appear to work as the examples show it should.
Below is some example code I found; it matches what I saw in the POD, but it isn't working.
#!/usr/bin/env perl

use strict;
use warnings;

use Net::RabbitMQ;

{
    # closure to return a new channel ID every time we call nextchan
    my $nextchan = 1;
    sub nextchan { return $nextchan++ }
}

### BEGIN CONFIGURABLE PARAMETERS ######################################
my $qserver = q{xx.xx.xx.xx};
my %qparms  = ();
my $qname   = q{gravity.checks};
my $message = q{Test injection};
### NO CONFIGURABLE PARAMETERS BELOW THIS LINE #########################

my $mq     = Net::RabbitMQ->new();
my $chanID = nextchan();

$message .= " " . scalar(localtime);
print STDERR qq{Will try to send message "$message" through channel $chanID};

$mq->connect( $qserver, %qparms );
It errors out:
$ ./send.pl
Will try to send message "Test injection Fri Nov 14 06:50:44 2014" through channel 1 Usage: Net::RabbitMQ::connect( conn, hostname, options ) at ./send.pl line 28.
The problem is that %qparms needs to be passed by reference, not directly. Changing line 28 to
$mq->connect( $qserver, \%qparms );
solved my problem.
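For completeness, a hedged sketch of how the rest of the send script might continue once the connection succeeds (the queue name and message come from the question; declaring the queue and publishing to the default exchange are assumptions about the intended setup):
$mq->connect( $qserver, \%qparms );          # \%qparms can carry user/password if needed
$mq->channel_open( $chanID );
$mq->queue_declare( $chanID, $qname );       # make sure the queue exists
$mq->publish( $chanID, $qname, $message );   # default exchange routes by queue name
$mq->disconnect();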
It doesn't error out at the print. The script prints to STDERR without checking whether an error occurred; it says it will try, and then it does:
$mq->connect( $qserver, %qparms );
That print is just information, not an error; the Usage message comes from the connect call itself.

How can my previously untainted data become tainted again?

I have a bit of a mystery here that I am not quite understanding the root cause of. I am getting an 'Insecure dependency in unlink while running with -T switch' error when trying to invoke unlink from a script. That is not the mystery, as I realize this means Perl is saying I am trying to use tainted data. The mystery is that this data was previously untainted in another script that saved it to disk without any problems.
Here's how it goes... The first script creates a binary file name using the following
# For the binary file upload
my $extensioncheck = '';
my $safe_filename_characters = "a-zA-Z0-9_.";

if ( $item_photo )
{
    # Allowable File Type Check
    my ( $name, $path, $extension ) = fileparse ( $item_photo, '\..*' );
    $extensioncheck = lc($extension);
    if (( $extensioncheck ne ".jpg" ) && ( $extensioncheck ne ".jpeg" ) &&
        ( $extensioncheck ne ".png" ) && ( $extensioncheck ne ".gif" ))
    {
        die "Your photo file is in a prohibited file format.";
    }

    # Rename file to Ad ID for adphoto directory use and untaint
    $item_photo = join "", $adID, $extensioncheck;
    $item_photo =~ tr/ /_/;
    $item_photo =~ s/[^$safe_filename_characters]//g;
    if ( $item_photo =~ /^([$safe_filename_characters]+)$/ ) { $item_photo = $1; }
    else { die "Filename contains invalid characters"; }
}
$adID is generated by the script itself using a localtime(time) function, so it should not be tainted. $item_photo is reassigned using $adID and $extensioncheck BEFORE the taint check, so the new $item_photo is now untainted. I know this because $item_photo itself causes no problem with unlink later in the script. $item_photo is only used long enough to create three other image files with ImageMagick before it's tossed using the unlink function. The three filenames created from the ImageMagick processing of $item_photo are built simply, like so:
$largepicfilename = $adID . "_large.jpg";
$adpagepicfilename = $adID . "_adpage.jpg";
$thumbnailfilename = $adID . "_thumbnail.jpg";
The paths are prepended to the new filenames to create the URLs, and are defined at the top of the script, so they can't be tainted either. The URLs for these files are generated like so:
my $adpageURL = join "", $adpages_dir_URL, $adID, '.html';
my $largepicURL = join "", $adphotos_dir_URL, $largepicfilename;
my $adpagepicURL = join "", $adphotos_dir_URL, $adpagepicfilename;
my $thumbnailURL = join "", $adphotos_dir_URL, $thumbnailfilename;
Then I write them to the record, knowing everything is untainted.
Now comes the screwy part. In a second script I read these records in so the files can be deleted using the unlink function, and this is where I am getting my 'Insecure dependency' flag.
# Read in the current Ad Records Database
open (ADRECORDS, $adrecords_db) || die("Unable to Read Ad Records Database");
flock(ADRECORDS, LOCK_SH);
seek (ADRECORDS, 0, SEEK_SET);
my @adrecords_data = <ADRECORDS>;
close(ADRECORDS);

# Find the Ad in the Ad Records Database
ADRECORD1: foreach $AdRecord (@adrecords_data)
{
    chomp($AdRecord);
    my ($adID_In, $adpageURL_In, $largepicURL_In, $adpagepicURL_In, $thumbnailURL_In) = split(/\|/, $AdRecord);
    if ($flagadAdID ne $adID_In) { $AdRecordArrayNum++; next ADRECORD1 }
    else
    {
        # Delete the Ad Page and Ad Page Images
        unlink ("$adpageURL_In");
        unlink ("$largepicURL_In");
        unlink ("$adpagepicURL_In");
        unlink ("$thumbnailURL_In");
        last ADRECORD1;
    }
}
I know I can just untaint them again, or even just blow them on through knowing the data is safe, but that is not the point. What I want is to understand WHY this is happening in the first place, as I don't understand how this previously untainted data is now being seen as tainted. Any help in seeing where I am missing the connection would be truly appreciated, because I really want to understand this rather than just write a hack around it.
Saving data to a file doesn't save any "tainted" bit with the data. It's just data, coming from an external source, so when Perl reads it it becomes automatically tainted. In your second script, you will have to explicitly untaint the data.
After all, some other malicious program could have changed the data in the file before the second script has a chance to read it.
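If it helps, a minimal sketch of that explicit untainting in the second script, reusing the variables from the loop above (the allowed-character pattern is an assumption and should be tightened to match your real paths):
# Untaint each path read from the records file before unlinking it.
for my $path_in ( $adpageURL_In, $largepicURL_In, $adpagepicURL_In, $thumbnailURL_In ) {
    if ( $path_in =~ m{^([\w./-]+)\z} ) {
        unlink $1 or warn "Could not unlink $1: $!";
    }
    else {
        warn "Skipping suspicious path: $path_in";
    }
}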