ESAPI logger with application server

I used the ESAPI jar for validation. When I call isValidInput(Context, input.trim(), ValidateConstant.APLHA_NUMERIC_TYPE, maxLength, true); or isValidInput(Context, input, ValidateConstant.NUMERIC_TYPE, maxLength, true);
and the input is invalid (it contains a special character),
it throws something like:
org.owasp.esapi.errors.ValidationException: input: Invalid input. Please conform to regex ^[0-9]*$ with a maximum length of 15
at org.owasp.esapi.reference.validation.StringValidationRule.checkWhitelist(StringValidationRule.java:144)
at org.owasp.esapi.reference.validation.StringValidationRule.checkWhitelist(StringValidationRule.java:160)
at org.owasp.esapi.reference.validation.StringValidationRule.getValid(StringValidationRule.java:284)
at org.owasp.esapi.reference.DefaultValidator.getValidInput(DefaultValidator.java:214)
at org.owasp.esapi.reference.DefaultValidator.isValidInput(DefaultValidator.java:152)
at org.owasp.esapi.reference.DefaultValidator.isValidInput(DefaultValidator.java:143)
This is shown when I execute the program standalone.
How can I integrate this exception into my application server's server.log file?

IntrusionDetector.org.owasp.esapi.errors.IntegrityException.actions=log,disable,logout
# rapid validation errors indicate scans or attacks in progress
# org.owasp.esapi.errors.ValidationException.count=10
# org.owasp.esapi.errors.ValidationException.interval=10
# org.owasp.esapi.errors.ValidationException.actions=log,logout
# sessions jumping between hosts indicates session hijacking
IntrusionDetector.org.owasp.esapi.errors.AuthenticationHostException.count=2
IntrusionDetector.org.owasp.esapi.errors.AuthenticationHostException.interval=10
IntrusionDetector.org.owasp.esapi.errors.AuthenticationHostException.actions=log,logout
#===========================================================================
# ESAPI Validation
#
# The ESAPI Validator works on regular expressions with defined names. You can define names
# either here, or you may define application specific patterns in a separate file defined below.
# This allows enterprises to specify both organizational standards as well as application specific
# validation rules.
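One way to surface those failures in your own log is to call getValidInput, which throws the ValidationException, instead of isValidInput, which catches it internally and just returns false. A minimal sketch, assuming Log4j is the framework behind your server.log; the wrapper class is hypothetical, and the "Numeric" type name stands in for whatever Validator.* pattern your NUMERIC_TYPE constant refers to in ESAPI.properties:

import org.apache.log4j.Logger;
import org.owasp.esapi.ESAPI;
import org.owasp.esapi.errors.ValidationException;

public class InputChecker {
    private static final Logger LOG = Logger.getLogger(InputChecker.class);

    public static boolean check(String context, String input, int maxLength) {
        try {
            // getValidInput throws on bad input, unlike isValidInput,
            // so the failure can be logged where we choose
            ESAPI.validator().getValidInput(context, input, "Numeric", maxLength, true);
            return true;
        } catch (ValidationException e) {
            // Log4j routes this to the application server's log file
            LOG.warn("Validation failed for " + context, e);
            return false;
        }
    }
}

Separately, ESAPI's own log output (including the intrusion-detector messages driven by the properties above) can be pointed at Log4j via the ESAPI.Logger entry in ESAPI.properties, so that it lands in the same file.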


Shell.Application & errors

To clarify, I am specifically asking how to HANDLE errors when using Shell.Application. Or, more to the point: CAN one handle errors, and IF SO, how? The font example is just that, an example of the current situation I am trying to solve. But the question remains: can one handle (not avoid, handle) an error when using Shell.Application? There may well be some subtlety to the answer, i.e. in general you can, but specifically not in the fonts example. That seems to be the case, and I am inclined to conclude that Shell.Application is old, broken technology that should simply not be used, because it can't be robust in all use cases, which to me says it's potentially fragile in any use case.
I am attempting to refine my use of Shell.Application.CopyHere(), specifically for use installing fonts. What I hope to do is address the occasional error where a font file is corrupt or otherwise not a valid font file.
So far as I can tell from this, there really is no way to address it. I could use argument value 16 ("Respond with Yes to All for any dialog box that is displayed") to perhaps get past the error, but with no return code I have no way to log the resulting error. Using .CopyHere() in PowerShell with a Try/Catch doesn't work either. Is this just old technology from a time when Microsoft simply accepted things failing ungracefully? Or am I missing a technique that solves the issue?
EDIT: Based on that link I provided, I tried the 1024 argument ("Do not display a user interface if an error occurs"), like so:
$fontFolder.CopyHere($fontFilePath, 1024)
Doesn't seem to do what it says it does, since I am seeing a dialog that says
Cannot install bogus.ttf
The file ... does not appear to be a valid font.
So, not only am I not able to get a meaningful error back, but the presence of an error disrupts execution of the script and requires user interaction. Ugh.
EDIT 2:
Not really a minimal code example, since my question is CAN this be done, even before how. But this is what I just tried.
$fontFilePath = '\\px\Rollouts\Misc\Fonts\bogus.ttf'
$fontFolderPath = "$env:windir\Fonts"
$fontFolder = $(New-Object -ComObject:Shell.Application).Namespace($fontFolderPath)
$fontFolder.CopyHere($fontFilePath, 1024)
Based on what that 1024 argument claims to do, I would expect this to fail, but also not stop processing while a dialog waits for user interaction.
Also, worth noting that bogus.ttf is simply an empty text file renamed with TTF extension. So, guaranteed not to successfully install a font.
It seems that the Fonts folder, while it does implement Folder.CopyHere, does not evaluate the flags passed in the second parameter. There don't seem to be any other standard utilities to install fonts programmatically, or to verify the integrity of a TTF non-interactively, either.
So this leaves us the option of rolling our own Font registration using the Win32 API. Basically, you have to copy the font to the Fonts folder:
# For .NET versions earlier than 4.0, hardcode to C:\Windows\Fonts
# Fonts special folder is new in 4.0
$fontDir = [System.Environment]::GetFolderPath([System.Environment.SpecialFolder]::Fonts)
Copy-Item C:\Path\To\Font.ttf "${fontDir}\Font.ttf"
This next call isn't strictly necessary, as it only adds the font temporarily to your user session, but it does function as an integrity check to see if the font is valid and can be imported. You need to P/Invoke AddFontResource from the Win32 API:
Add-Type -Name Gdi32 -Namespace Win32 -MemberDefinition @"
[System.Runtime.InteropServices.DllImport("gdi32.dll")]
public static extern int AddFontResource(string lpszFilename);
"@
# Returns 1 on success or 0 on failure, if you want your error checking here
[Win32.Gdi32]::AddFontResource("${fontDir}\FontFileName.ttf")
And then register it at the following registry key. This piece is necessary for persisting the font you copied to $fontDir:
New-ItemProperty -Path 'HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Fonts' -Name "FontName" -Value "FontFileName.ttf"
You can programmatically get the FontName using the PrivateFontCollection class, and the Families[x].Name property under that, where x is the index of the font in the collection you want the name for.
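A sketch of that lookup (untested; "FontFileName.ttf" is the same placeholder used above):

Add-Type -AssemblyName System.Drawing
$fonts = New-Object System.Drawing.Text.PrivateFontCollection
# AddFontFile throws if the file cannot be loaded as a font,
# so this also doubles as an integrity check
$fonts.AddFontFile("${fontDir}\FontFileName.ttf")
$fontName = $fonts.Families[0].Name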
To address your edit, there isn't going to be a "one size fits all" approach when it comes to handling errors in Shell.Application. There are multiple reasons to avoid using Shell.Application, such as:
APIs are fraught with legacy behavior, such as using modal dialogs to report errors
Windows standards are implemented inconsistently, such as Fonts ignoring the documented CopyFile flags
Not available in a non-interactive session
The first two points are cases the example in your question hits. The third comes into play in many automation scenarios, especially when non-interactive service accounts are used to execute commands. There are very few cases where something can only be done via Shell.Application, so it's almost always best to avoid it when an alternative API exists. Your font installation scenario is an excellent example of this.

Are there any built-in keys I can use in my segmentation rules in Intuit's Wasabi?

I am trying to use user segmentation rules in Wasabi. Are there any built-in keys that I can use?
You can segment on these three things:
The passed segmentation profile
The HTTP Header values
The context parameter
For example, if you wanted to segment on the context, you would build a segmentation rule like context = "QA"; for targeting Chrome users, a (naive) approach would be a rule matching the User-Agent against the regular expression ".*chrome.*". So, for example, User-Agent =~ ".*curl.*" & context = "QA" would match users who are in the context QA (?context=QA) and use curl.
Note that, sadly, the alphabetical operators (e.g. contains) which you can select in the UI are currently broken; we are working on a fix for the (still precompiled) library Hyrule, which leverages those rules. For now I recommend converting your rules to the text view, or testing them to see whether they work. There is also validation in place when you try to save a rule.

How to produce business rule output that can be examined in the ODM Rule Execution Server Console?

I am new to ODM 8.5 (the successor to JRules), and I am trying to test some rules in the ODM Rule Execution Server Console. At this point, I'm merely trying to confirm that my rule changes have been deployed to the RES successfully. According to ODM's Testing Ruleset Execution help page, I should be able to examine the Output text box to see "strings that are written to print.out" from the web page under Explorer > RuleApps > RuleApp > Ruleset > Test Ruleset. I've deployed a rule that calls System.out.println.
However, after executing the rule, I don't see the println output in the Output box. Is println what the documentation refers to when it says "print.out"? I get syntax errors if I try to replace "System.out.println" with "print.out". How can I get simple debug output to appear in the Output box?
The note method will cause output to go to the Output text box of the ODM Rule Execution Server Console, e.g., use:
note("*** This is the rule modification ***");
You can use the Decision Warehouse (DW) in the RES console.
First you need to activate tracing in the ruleset properties.
Then, after an execution, you can search DW for execution information such as which rules executed, data values, and so on. Check the online documentation for details (look for IBM ODM 8.5).
Please note that this may slow down your decisions, so it is better not to use this feature in production systems. Hope this helps.

Perl parsing a log4j log

We have several applications that use log4j for logging. I need to get a log4j parser working so we can combine multiple log files and run automated analysis on them. I'm not looking to reinvent the wheel, so can someone point me to a decent pre-existing parser? I do have the log4j conversion pattern if that helps.
If not, I'll have to roll my own.
I didn't realize that Log4J ships with an XML appender.
The solution was: specify an XML appender in the logging configuration file, include that output XML file as an entity in a well-formed XML file, then parse the XML using your favorite technique.
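For reference, a sketch of that setup with log4j 1.x (file names are placeholders). The appender side, in log4j.properties:

log4j.appender.XML=org.apache.log4j.FileAppender
log4j.appender.XML.File=app-log.xml
log4j.appender.XML.layout=org.apache.log4j.xml.XMLLayout
log4j.rootLogger=DEBUG, XML

XMLLayout writes a stream of log4j:event elements rather than a well-formed document, hence the entity trick: a small wrapper file pulls the output in so any XML parser can read it:

<?xml version="1.0"?>
<!DOCTYPE log4j:eventSet [<!ENTITY data SYSTEM "app-log.xml">]>
<log4j:eventSet version="1.2" xmlns:log4j="http://jakarta.apache.org/log4j/">
&data;
</log4j:eventSet>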
The other methods had the following limitations:
Apache Chainsaw - not automated enough
jdbc - poor performance in a high performance distributed app
You can use OtrosLogViewer with batch processing. You have to:
Define your log format, using the Log4j pattern layout parser or Log4j XmlLayout
Create a Java class that implements LogDataParsedListener; its method public void logDataParsed(LogData data, BatchProcessingContext context) will be called for every parsed log event
Package it as a jar
Run OtrosLogViewer, specifying your log-processing jar, the LogDataParsedListener implementation, and the log files
What you are looking for is called SawMill, or something like it.
Log4j log files aren't really suitable for parsing, they're too complex and unstructured. There are third party tools that can do it, I believe (e.g. Sawmill).
If you need to perform automated, custom analysis of the logs, you should consider logging to a database and analysing that. Log4j ships with the JDBCAppender, which appends all messages to a database of your choice, but it has performance implications and is a bit flaky. There are other, similar alternatives on the interweb, though (like this one).
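A minimal log4j 1.x JDBCAppender sketch, for reference (the connection details and table layout are placeholders):

log4j.appender.DB=org.apache.log4j.jdbc.JDBCAppender
log4j.appender.DB.URL=jdbc:mysql://localhost/logdb
log4j.appender.DB.driver=com.mysql.jdbc.Driver
log4j.appender.DB.user=loguser
log4j.appender.DB.password=secret
# The message is interpolated straight into the SQL, which is exactly the
# quoting weakness that makes this appender flaky with arbitrary log text
log4j.appender.DB.sql=INSERT INTO logs (level, logger, message) VALUES ('%p', '%c', '%m')
log4j.rootLogger=DEBUG, DB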
You -can- use Log4j's Chainsaw V2 to process the various log files and collect them into one table, and either output those events as xml or use Chainsaw's built-in expression-based filtering, searching & colorizing support to slice & dice the logs.
Steps:
- Start Chainsaw V2
- Create a chainsaw configuration file by copying the example configuration file available from the Welcome tab - define one LogFilePatternReceiver 'plugin' entry for each log file that you want to process
- Start Chainsaw with that configuration
- Each log file will end up as a separate tab in the UI
- Pause the chainsaw-log tab and clear the events from that tab
- Create a new tab which aggregates the events from the various tabs by going to the 'View, create custom expression LogPanel' menu item and entering 'level >= DEBUG' in the box. It will create a new tab containing events from all of the tabs with level >= DEBUG (which is why you cleared the chainsaw-log tab).
You can get an overview of the expression syntax used to filter, colorize and search from the tutorial (available from the Help menu).
If you don't want to use Chainsaw, you can do something similar: start a simple app that doesn't log but loads a log4j.xml config file with the 'plugin' entries you defined for the Chainsaw configuration, and also define a FileAppender with an XMLLayout - all of the events received by the 'receivers' will be sent to the single appender.
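A sketch of such a configuration (receiver class and parameter names as used by Chainsaw's LogFilePatternReceiver; the path and logFormat values are placeholders to adapt to your own conversion pattern):

<?xml version="1.0" encoding="UTF-8"?>
<log4j:configuration xmlns:log4j="http://jakarta.apache.org/log4j/">
  <!-- one plugin entry per log file to ingest -->
  <plugin name="app1" class="org.apache.log4j.varia.LogFilePatternReceiver">
    <param name="fileURL" value="file:///logs/app1.log"/>
    <param name="logFormat" value="TIMESTAMP LEVEL [LOGGER] MESSAGE"/>
    <param name="timestampFormat" value="yyyy-MM-dd HH:mm:ss,SSS"/>
  </plugin>
  <!-- single appender that collects everything the receivers emit -->
  <appender name="XML" class="org.apache.log4j.FileAppender">
    <param name="file" value="combined-log.xml"/>
    <layout class="org.apache.log4j.xml.XMLLayout"/>
  </appender>
  <root>
    <appender-ref ref="XML"/>
  </root>
</log4j:configuration>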

Catalyst Not Accepting DBIx Generated Schema

I am using Catalyst::Plugin::AutoCRUD and am generating a DBIx schema using the instructions provided in the linked CPAN page. Specifically, I copy/pasted the command listed there and changed only the details relevant to my database ('pg' => 'mysql', different username/pw, etc).
I now have a schema, DBIC::Database::foo::Schema. It consists of a directory containing a .pm for each table in my DB, plus its own Schema.pm.
My config file contains the following entry:
<Model::AutoCRUD::DBIC>
schema_class Database::foo::Schema
connect_info dbi:mysql:dbname=foo
connect_info user
connect_info pass
<connect_info>
AutoCommit 1
</connect_info>
</Model::AutoCRUD::DBIC>
When I go to start the AutoCRUD server, I get the following error message:
Couldn't instantiate component "DemoApp::Model::AutoCRUD::DBIC", "Attribute (schema_class)
does not pass the type constraint because: Validation failed for
'Catalyst::Model::DBIC::Schema::Types::SchemaClass' with value Database::foo::Schema at
/Library/Perl/5.12/darwin-thread-multi-2level/Moose/Meta/Attribute.pm line 1275.
As I am new to Catalyst and this plugin, I don't know how to resolve this issue. Google has not been very helpful - I found this discussion, but from what I can tell the issue was that Catalyst was being pointed towards the wrong *.pm (although I could be misreading this).
In case this is helpful, here are the contents of Schema.pm:
use utf8;
package DBIC::Database::foo::Schema;
# Created by DBIx::Class::Schema::Loader
# DO NOT MODIFY THE FIRST PART OF THIS FILE
use strict;
use warnings;
use base 'DBIx::Class::Schema';
__PACKAGE__->load_namespaces;
# Created by DBIx::Class::Schema::Loader v0.07024 # 2012-05-20 07:25:21
# DO NOT MODIFY THIS OR ANYTHING ABOVE! md5sum:cevz/k4rUWIcEhMl29r0QA
# You can replace this text with custom code or comments, and it will be preserved on regeneration
1;
Please help!
Your schema is named DBIC::Database::foo::Schema, but in the config file you have Database::foo::Schema, with the leading DBIC:: missing. The names must match exactly (and they are case sensitive), so either change the name of your schema path and files or correct the config.
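For example, keeping the generated class name, the config entry would begin:

<Model::AutoCRUD::DBIC>
    schema_class DBIC::Database::foo::Schema
    ...
</Model::AutoCRUD::DBIC>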
Completely rebuilding the DBIC classes following the Catalyst manual solved the problem. While I cannot pinpoint what was unacceptable to Moose in the first set of classes, the second set had one additional problem: the line __PACKAGE__->meta->make_immutable; was generated for every class (i.e. in each *.pm). Commenting it out and restarting Catalyst yielded a functioning CRUD app.