When I run tests that fail, I get huge output with a lot of markup hiding the actual error.
Example:
$ perl script/my_prove.pl t/2410-topinfo.t
t/2410-topinfo.t .. 1/?
# Failed test '200 OK'
# at t/2410-topinfo.t line 12.
# got: '500'
# expected: '200'
# Failed test 'similar match for selector "h1"'
# at t/2410-topinfo.t line 12.
# ''
# doesn't match '(?^:Flatinfo\ Business\-Apartment\ Hietzing)'
# Failed test 'content is similar'
# at t/2410-topinfo.t line 12.
# '<!DOCTYPE html>
# <html>
# <head>
# <title>Server error (development mode)</title>
# <meta http-equiv="Pragma" content="no-cache">
# <meta http-equiv="Expires" content="-1">
# <script src="/mojo/jquery/jquery.js"></script>
# <script src="/mojo/prettify/run_prettify.js"></script>
# <link href="/mojo/prettify/prettify-mojo-dark.css" rel="stylesheet">
# <style>
# a img { border: 0 }
# body {
#
# ........... lots of lines removed here ...........
#
# <div id="wrapperlicious">
# <div id="nothing" class="box spaced"></div>
# <div id="showcase" class="box code spaced">
# <pre id="error">Can't call method "name" on an undefined value at template extern/topinfo/show.html.ep line 2.
# </pre>
#
# .... lots of lines follow here ............
The error seems to be a single line:
Can't call method "name" on an undefined value at template extern/topinfo/show.html.ep line 2
The test script producing this output is:
use Mojo::Base -strict;

use Test::More;
use Test::Mojo;
use FindBin;
require "$FindBin::Bin/../script/ba_db";

my $t = Test::Mojo->new( 'BaDb' );
$t->ua->max_redirects(1);

$t->get_ok('/info/penx2')
    ->status_is(200)
    ->text_like('h1' => qr/\QFlatinfo Business-Apartment Hietzing\E/)
    ->content_like( qr/\QSelected language: German\E/ )
    # ...
    ;

done_testing();
Is there a way to tell Mojolicious to respond without all this HTML markup so that I can see the error message immediately?
There are two things at play here.
The large debug output with the full page source appears because the content_like method from Test::Mojo didn't find a match, and it shows you the string it was searching in. That's a convenience, but when the page is large, it's a lot of text. Normally that tells you the test failed because the content was wrong, but in this specific case it didn't.
The real problem is that the test failed because of an error in your template. You can already see that from the very first test.
$t->get_ok('/info/penx2')
    ->status_is(200)
This test also failed. (That can be confusing if you're used to Test::WWW::Mechanize, whose get_ok also checks that the response was 200 OK; Test::Mojo's get_ok only checks that a response came back at all.)
# Failed test '200 OK'
# at t/2410-topinfo.t line 12.
# got: '500'
# expected: '200'
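If you want that first failure to surface the real error immediately, you can check the status yourself before carrying on with the chain and pull the message straight out of the development exception page, which (as the output above shows) carries it in a pre element with id "error". A minimal sketch, using the $t object from the question:

$t->get_ok('/info/penx2');

if ($t->tx->res->code == 500) {
    # the development exception page carries the real error in <pre id="error">
    my $pre = $t->tx->res->dom->at('pre#error');
    BAIL_OUT('server error: ' . ($pre ? $pre->text : 'unknown'));
}

$t->status_is(200);   # continue with the remaining assertions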
The actual error message should also appear somewhere else, without all that HTML markup: while performing the get_ok, the application encountered the error and wrote it to the application log, which in a unit test usually ends up on STDERR.
I don't know whether you trimmed it from your example, but the log output should be there too, I believe.
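If the log really is being swallowed, you can also forward it into the TAP stream yourself. Mojo::Log emits a message event, so a small sketch like this in the test script (which already loads Test::More) prints every application log line as a test diagnostic:

# forward application log lines to TAP diagnostics
$t->app->log->on(message => sub {
    my ($log, $level, @lines) = @_;
    diag "[$level] $_" for @lines;
});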
Getting back to the HTML and the actual question: the reason it's printed is that Test::Mojo's content_like (and most of its other methods) uses Test::More under the hood. It simply dispatches to like from Test::More, passing along the page content, and like always displays the full string it was matching against.
In recent versions, Test::More already uses Test2 under the hood; that's where the code that prints the full string lives.
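Conceptually, the failing assertion boils down to something like this simplified sketch (not the actual Test::Mojo source):

# roughly what content_like does: grab the response body and hand it
# to Test::More's like(), which dumps the whole string in its
# diagnostics when the match fails
my $body = $t->tx->res->text;
like $body, qr/\QSelected language: German\E/, 'content is similar';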
Unfortunately there is not much you can do about it. I'd focus on finding out why you don't see a proper log during the unit tests (possibly because you didn't run prove with -v), and maybe find a way to make errors come out in color, which would make them easier to spot. There is a color logger for the Dancer2 framework (which I maintain), but at the time there wasn't one for Mojo.
Now there is Mojo::Log::Colored, which can color individual log lines based on their log level.
use Mojo::Log::Colored;

# Log to STDERR
$app->log(
    Mojo::Log::Colored->new(
        # optionally set the colors
        colors => {
            debug => "bold bright_white",
            info  => "bold bright_blue",
            warn  => "bold green",
            error => "bold yellow",
            fatal => "bold yellow on_red",
        }
    )
);
This will give you nice colorful output to the console. Here's an example script.
$ MOJO_LOG_LEVEL=debug perl -Mojo -MMojo::Log::Colored \
    -e 'a(
        "/" => sub {
            app->log->$_("hello world") for qw/debug info warn error fatal/;
            shift->render(text=>"ok");
        })->log( Mojo::Log::Colored->new )->start' \
    daemon
And the output if called with $ curl localhost:3000.
I have code (some lines are removed):
package MaitreD::Command::bank_statement;
use Mojo::Base 'Mojolicious::Command';

sub run {
    ...
    my $payments = read_file( $file ); # line 58
    ...
}

use XBase; # line 174

sub read_file {
    ...
}

1;
I run my application and then make two HTTP requests to it. The controller runs this command as:
$c->app->commands->run( bank_statement => $upload );
I get the following error (this one is expected):
Can't locate XBase.pm in @INC (you may need to install the XBase module) (@INC contains: /opt/monkeyman/lib /opt/monkeyman/local/lib/perl5/x86_64-linux /opt/monkeyman/local/lib/perl5 /opt/monkeyman/lib /opt/monkeyman/local/lib/perl5/5.24.1/x86_64-linux /opt/monkeyman/local/lib/perl5/5.24.1 /opt/monkeyman/local/lib/perl5/x86_64-linux /opt/monkeyman/local/lib/perl5 /opt/perlbrew/perls/perl-5.24.1/lib/site_perl/5.24.1/x86_64-linux /opt/perlbrew/perls/perl-5.24.1/lib/site_perl/5.24.1 /opt/perlbrew/perls/perl-5.24.1/lib/5.24.1/x86_64-linux /opt/perlbrew/perls/perl-5.24.1/lib/5.24.1 .) at /opt/monkeyman/lib/MaitreD/Command/bank_statement.pm line 174.
BEGIN failed--compilation aborted at /opt/monkeyman/lib/MaitreD/Command/bank_statement.pm line 174.
Compilation failed in require at (eval 2620) line 1.
But when I made the second request, I got a different error:
Undefined subroutine &MaitreD::Command::bank_statement::read_file called at /opt/monkeyman/lib/MaitreD/Command/bank_statement.pm line 58.
How could MaitreD::Command::bank_statement::run be run from the controller if compilation of the MaitreD::Command::bank_statement module failed?
If I understand correctly, the module MaitreD::Command::bank_statement was compiled partially, up to line 174. So the next HTTP request to the app can still call MaitreD::Command::bank_statement::run, and when line 58 is reached I get Undefined subroutine &M::C::b::read_file called, because nothing after line 174 was compiled.
How do I prevent partial compilation?
If any error occurs, I want nothing from MaitreD::Command::bank_statement to be available.
It seems you should focus on ensuring that use XBase actually succeeds, because presumably it's there for a reason and it's needed for the rest of the program to work.
Why does it fail? Fix that first and the partial compilation isn't a problem.
In this case, why can't perl find the module?
Is it possible the Command::bank_statement class isn't used directly, but only when it's being run, so maybe the current working directory changed between the program start and the time $c->app->commands->run( bank_statement => $upload ); was called?
If that's the case, try loading the command class earlier, e.g. by adding this to the Mojo application class (probably something like lib/MaitreD.pm):
use MaitreD::Command::bank_statement;
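That way a compile-time failure (like the missing XBase) aborts application startup loudly instead of leaving a half-compiled package behind. In context, the application class might look roughly like this sketch (the startup body here is hypothetical):

package MaitreD;
use Mojo::Base 'Mojolicious';

# Loading the command class at compile time means a failing `use XBase`
# inside it kills the whole app at startup rather than surfacing as
# "Undefined subroutine" on a later request.
use MaitreD::Command::bank_statement;

sub startup {
    my $self = shift;
    # ... routes, plugins, etc. (unchanged)
}

1;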
Recently I was debugging a piece of code and found the following Template Toolkit usage in a template.
# Constants.pm
# Bugzilla version
use constant BUGZILLA_VERSION => "4.0.11";

# template file index.html.tmpl
[% PROCESS global/header.html.tmpl
   header_addl_info = "version $constants.BUGZILLA_VERSION"
   style_urls = [ 'skins/standard/index.css' ]
%]

# index.cgi
use Bugzilla::Constants;
.......
print "buzilla version : $constants.BUGZILLA_VERSION <br/>";
When I use the same syntax in the main CGI script, it gives a 500 error, so I have to write it like this instead:
print "buzilla version : ".Bugzilla::Constants::BUGZILLA_VERSION." <br/>";
'.' means something different in Template::Toolkit: it's the dot operator for looking up keys and calling methods, so $constants.BUGZILLA_VERSION only works inside a template.
In plain Perl you just use BUGZILLA_VERSION:
$ perl -E 'use constant BUGZILLA_VERSION=>"4.0.11"; say BUGZILLA_VERSION'
4.0.11
$
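For the CGI script itself, that translates to something like the sketch below. Bareword constants are plain subs, so they don't interpolate inside double-quoted strings; you concatenate instead. This assumes Bugzilla::Constants exports BUGZILLA_VERSION, as stock Bugzilla does:

use Bugzilla::Constants;   # exports BUGZILLA_VERSION

# constants don't interpolate in "...", so concatenate
print "bugzilla version : " . BUGZILLA_VERSION . " <br/>";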
I'm migrating some shell scripts to Chef recipes. Some of these scripts are fairly involved, so just to make life easier in the short term and to avoid introducing bugs in rewriting everything in Chef/Ruby, I'd like to just run some of them as-is. They're all well-written and idempotent, so honestly there's no rush, but of course, the eventual goal is to rewrite them.
One cool feature of Ruby is its __END__ keyword: lines below __END__ will not be executed, but they are available via the special filehandle DATA.
It would be cool to ship the shell scripts as-is inside the recipe after __END__, maybe something like the following, which I placed in chef-repo/cookbooks/ruby-data-test/recipes/default.rb:
file = Tempfile.new(File.basename(__FILE__))
file << DATA.read
bash file.path
file.unlink
__END__
echo "Hello, world"
However when I run this (with chef-solo -c solo.rb --override-runlist 'recipe[ruby-data-test]'), I get the following error:
[2014-10-03T17:14:56+00:00] ERROR: uninitialized constant Chef::Recipe::DATA
I'm pretty new to Chef, but I'm guessing this has to do with Chef wrapping my recipe in a class, and something simple is preventing me from accessing DATA. Since it's "global" (?), I tried putting a dollar sign in front of it ($DATA), but that failed with:
NoMethodError
-------------
undefined method `read' for nil:NilClass
So the question is: How do I access DATA in my Chef recipe? Thanks!
It appears you don't have access to DATA, but you can fake it by reading in the current file yourself and splitting on __END__, like Sinatra does.
I ended up making a Chef LWRP for reuse. I don't know if I'll actually end up using this, but I wanted to figure it out. Like I said, I'm a Chef/Ruby noob, so better ideas or suggestions are welcome!
ruby_data_test/recipes/default.rb:
ruby_data_test_execute_ruby_data __FILE__
__END__
#!/bin/bash
set -o errexit
date
echo "Hello, world"
ruby_data_test/resources/execute_ruby_data.rb:
actions :execute_ruby_data
default_action :execute_ruby_data
attribute :source, :name_attribute => true, :required => true
attribute :args, :kind_of => Array
attribute :ignore_errors, :kind_of => [TrueClass, FalseClass], :default => false
ruby_data_test/providers/execute_ruby_data.rb:
require 'tempfile'
require 'open3'

def whyrun_supported?
  true
end

use_inline_resources

action :execute_ruby_data do
  converge_by("Executing #{@new_resource}") do
    Chef::Log.info("Executing #{@new_resource}")

    file_who_called_me = @new_resource.source
    io = ::IO.respond_to?(:binread) ? ::IO.binread(file_who_called_me) : ::IO.read(file_who_called_me)
    app, data = io.gsub("\r\n", "\n").split(/^__END__$/, 2)
    data.lstrip!

    file = Tempfile.new('execute_ruby_data')
    file << data
    file.chmod(0755)
    file.close

    exit_status = ::Open3.popen2e(file.path, *@new_resource.args) do |stdin, stdout_and_stderr, wait_thr|
      stdout_and_stderr.each { |line| puts line }
      wait_thr.value # exit status
    end

    if exit_status != 0 && !@new_resource.ignore_errors
      raise RuntimeError
    end
  end
end
Here's the output:
$ chef-solo -c solo.rb --override-runlist 'recipe[ruby_data_test]'
Starting Chef Client, version 11.12.4
[2014-10-03T21:50:29+00:00] WARN: Run List override has been provided.
[2014-10-03T21:50:29+00:00] WARN: Original Run List: []
[2014-10-03T21:50:29+00:00] WARN: Overridden Run List: [recipe[ruby_data_test]]
Compiling Cookbooks...
Converging 1 resources
Recipe: ruby_data_test::default
* ruby_data_test_execute_ruby_data[/root/chef/chef-repo/cookbooks/ruby_data_test/recipes/default.rb] action execute_ruby_dataFri Oct 3 21:50:29 UTC 2014
Hello, world
- Executing ruby_data_test_execute_ruby_data[/root/chef/chef-repo/cookbooks/ruby_data_test/recipes/default.rb]
Running handlers:
Running handlers complete
Chef Client finished, 1/1 resources updated in 1.387608 seconds
I want to protect all forms from CSRF with Dancer.
I tried using Plack::Middleware::CSRFBlock, but it failed with the error "CSRFBlock needs Session." Even when I use Plack::Session, the forms don't get a hidden input field containing the one-time token.
Is there a good practice for doing this? Any advice is much appreciated.
My environment/development.yml is:
# configuration file for development environment
# the logger engine to use
# console: log messages to STDOUT (your console where you started the
# application server)
# file: log message to a file in log/
logger: "console"
# the log level for this environment
# core is the lowest, it shows Dancer's core log messages as well as yours
# (debug, info, warning and error)
log: "core"
# should Dancer consider warnings as critical errors?
warnings: 1
# should Dancer show a stacktrace when an error is caught?
show_errors: 1
# auto_reload is a development and experimental feature
# you should enable it by yourself if you want it
# Module::Refresh is needed
#
# Be aware it's unstable and may cause a memory leak.
# DO NOT EVER USE THIS FEATURE IN PRODUCTION
# OR TINY KITTENS SHALL DIE WITH LOTS OF SUFFERING
auto_reload: 0
session: Simple
#session: YAML
plack_middlewares:
  -
    #- Session
    - CSRFBlock
    - Debug
    - panels
    -
      - Parameters
      - Dancer::Version
      - Dancer::Settings
      - Memory
and the route is:
get '/test' => sub {
    return <<EOM
<!DOCTYPE html>
<html>
<head><title>test route</title></head>
<body>
    <form action="./foobar" method="post">
        <input type="text"/>
        <input type="submit"/>
    </form>
</body>
</html>
EOM
};
Well, I noticed the Debug panel isn't shown, meaning Plack::Middleware::Debug isn't loaded.
With help from How to use Dancer with Plack middlewares | PerlDancer Advent Calendar and Plack::Middleware::Debug::Dancer::Version, I managed to turn it on:
session: PSGI
## Dancer::Session::PSGI
plack_middlewares:
  -
    - Session
  -
    - CSRFBlock
  -
    - Debug
    ## panels is an argument for Debug, as in
    ## enable 'Debug', panels => [ qw( Parameters Response Environment Session Timer Dancer::Logger Dancer::Settings Dancer::Version ) ];
    - panels
    -
      - Parameters
      - Response
      - Environment
      - Session
      - Timer
      - Dancer::Logger
      - Dancer::Settings
      - Dancer::Version
      ## Plack::Middleware::Debug::Dancer::Version
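For reference, that plack_middlewares setting corresponds to wrapping the app yourself in a .psgi file, roughly like this sketch (based on the Advent Calendar article linked above; MyApp stands in for your Dancer app package):

# app.psgi -- hand-rolled equivalent of the plack_middlewares setting
use Dancer;
use MyApp;                 # hypothetical Dancer app package
use Plack::Builder;

builder {
    enable 'Session';      # CSRFBlock refuses to start without a session
    enable 'CSRFBlock';    # injects the hidden token field into forms
    enable 'Debug', panels => [qw(
        Parameters Response Environment Session Timer
        Dancer::Logger Dancer::Settings Dancer::Version
    )];
    dance;
};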
Has anyone ever had a unit test that fails when run normally, but succeeds when run under the debugger while trying to find out where the failure occurs?
I'm using Eclipse 3.5.1 with EPIC 0.6.35 and ActiveState ActivePerl 5.10.0. I wrote module A and module B, both with multiple routines. A routine in module B calls a bunch of routines from module A. I'm adding mock objects to my module B unit test file to get more complete code coverage on module B, where the code checks whether the calls to module A's routines fail or succeed. So I added some mock objects to my unit test to force some of the module A routines to return failures, but I was not getting the failures as expected. When I debugged my unit test file, the calls to the module A routines did fail as expected (and my unit test succeeded). When I run the unit test file normally without debugging, the call to the mocked module A routine does not fail as expected (and my unit test fails).
What could be going on here? I'll try to post a working example of my problem if I can get it to fail using a small set of simple code.
ADDENDUM: I whittled my code down to a bare minimum set that demonstrates the problem. Details and a working example follow.
My Eclipse project contains a "lib" directory with two modules, MainModule.pm and UtilityModule.pm. At the top level it also contains a unit test file named MainModuleTest.t and a text file called input_file.txt, which just contains some garbage text.
EclipseProject/
    MainModuleTest.t
    input_file.txt
    lib/
        MainModule.pm
        UtilityModule.pm
Contents of the MainModuleTest.t file:
use Test::More qw(no_plan);
use Test::MockModule;
use MainModule qw( mainModuleRoutine );

$testName = "force the Utility Module call to fail";

# set up mock utility routine that fails
my $mocked = Test::MockModule->new('UtilityModule');
$mocked->mock( 'slurpFile', undef );

# call the routine under test
my $return_value = mainModuleRoutine( 'input_file.txt' );

if ( defined($return_value) ) {
    # failure; actually expected undefined return value
    fail($testName);
}
else {
    # this is what we expect to occur
    pass($testName);
}
Contents of the MainModule.pm file:
package MainModule;
use strict;
use warnings;
use Exporter;
use base qw(Exporter);
use UtilityModule qw( slurpFile );
our #EXPORT_OK = qw( mainModuleRoutine );
sub mainModuleRoutine {
my ( $file_name ) = #_;
my $file_contents = slurpFile($file_name);
if( !defined($file_contents) ) {
# failure
print STDERR "slurpFile() encountered a problem!\n";
return;
}
print "slurpFile() was successful!\n";
return $file_contents;
}
1;
Contents of the UtilityModule.pm file:
package UtilityModule;
use strict;
use warnings;
use Exporter;
use base qw(Exporter);
our #EXPORT_OK = qw( slurpFile );
sub slurpFile {
my ( $file_name ) = #_;
my $filehandle;
my $file_contents = "";
if ( open( $filehandle, '<', $file_name ) ) {
local $/=undef;
$file_contents = <$filehandle>;
local $/='\n';
close( $filehandle );
}
else {
print STDERR "Unable to open $file_name for read: $!";
return;
}
return $file_contents;
}
1;
When I right-click on MainModuleTest.t in Eclipse and select Run As | Perl Local, it gives me the following output:
slurpFile() was successful!
not ok 1 - force the Utility Module call to fail
1..1
# Failed test 'force the Utility Module call to fail'
# at D:/Documents and Settings/[SNIP]/MainModuleTest.t line 13.
# Looks like you failed 1 test of 1.
When I right click on the same unit test file and select Debug As | Perl Local, it gives me the following output:
slurpFile() encountered a problem!
ok 1 - force the Utility Module call to fail
1..1
So, this is obviously a problem. Run As and Debug As should give the same results, right?!?!?
Both Exporter and Test::MockModule work by manipulating the symbol table. Things that do that don't always play nicely together. In this case, Test::MockModule is installing the mocked version of slurpFile into UtilityModule after Exporter has already exported it to MainModule. The alias that MainModule is using still points to the original version.
To fix it, change MainModule to use the fully qualified subroutine name:
my $file_contents = UtilityModule::slurpFile($file_name);
The reason this works in the debugger is that the debugger also uses symbol table manipulation to install hooks. Those hooks must be getting installed in the right way and at the right time to avoid the mismatch that occurs normally.
It's arguable that it's a bug (in the debugger) any time the code behaves differently there than it does when run outside the debugger, but when you have three modules all mucking with the symbol table it's not surprising that things might behave oddly.
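An alternative to changing the call site is to aim the mock at the package where the alias actually lives: since MainModule received its own copy of slurpFile at import time, you can mock that copy instead. A sketch:

# mock the imported alias in MainModule instead of the original in
# UtilityModule, so the copy mainModuleRoutine actually calls is the
# one that gets replaced
my $mocked = Test::MockModule->new('MainModule');
$mocked->mock( 'slurpFile', undef );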
Does your mocking manipulate the symbol table? I've seen a bug in the debugger that interferes with symbol table munging. Although in my case the problem was reversed; the code broke under the debugger but worked when run normally.