Two SecDefaultAction in OWASP ModSecurity Core Ruleset

I just downloaded the latest version (2.2.9) of the OWASP ModSecurity Core Ruleset.
In the provided "modsecurity_crs_10_setup.conf.example" there are two SecDefaultAction directives right next to each other:
SecDefaultAction "phase:1,deny,log"
SecDefaultAction "phase:2,deny,log"
I thought that as soon as a new SecDefaultAction directive is defined, it is the one used for the rules that follow. I therefore do not understand the purpose of
SecDefaultAction "phase:1,deny,log"
when another SecDefaultAction is defined immediately afterwards.
Thanks,
Ronald

The second one defines what happens if you exceed your threshold during phase 2 instead of in phase 1.
https://github.com/SpiderLabs/ModSecurity/wiki/Reference-Manual#processing-phases

Related

Shibboleth upgrade - deprecated features in v4.1.0

I recently upgraded Shibboleth from v4.0.1 to v4.1.0. After the upgrade, I get a deprecation warning regarding SAML2NameID. I am using this feature in the "attribute-resolver.xml" config file and couldn't find its replacement in the Shibboleth documentation. Can anyone suggest what I should use in place of SAML2NameID?
WARN [DEPRECATED:125] - [:] - xsi:type 'SAML2NameID', (file [conf/attribute-resolver.xml]): This will be removed in the next major version of this software; replacement is (none)
Using SAML2NameID encoders in the resolver is now deprecated, and instead you're given dedicated configuration to generate name identifiers per SAML protocol, per SP, per attribute, etc.
Per the docs,
The saml-nameid.xml configuration file defines two list beans, each one an ordered list of "generator" plugins for the two different SAML versions. Each plugin is specific to an identifier Format, a SAML constant that identifies the kind of value being expressed. The generation process involves selecting a list of Formats to try and generate (see Format Selection below), and then trying each Format until an appropriate value is obtained by running each configured generator in order.
Please see this link.
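For orientation, the generator lists live in conf/saml-nameid.xml. A minimal sketch of what an attribute-sourced SAML 2 generator looks like there (the emailAddress Format and the 'mail' attribute id are illustrative assumptions, not necessarily what your deployment uses):
<!-- inside conf/saml-nameid.xml, which already declares the util: and p: namespaces -->
<util:list id="shibboleth.SAML2NameIDGenerators">
    <!-- keep the default transient generator -->
    <ref bean="shibboleth.SAML2TransientGenerator" />
    <!-- hypothetical example: build an emailAddress NameID from the resolved 'mail' attribute -->
    <bean parent="shibboleth.SAML2AttributeSourcedGenerator"
        p:format="urn:oasis:names:tc:SAML:2.0:nameid-format:emailAddress"
        p:attributeSourceIds="#{ {'mail'} }" />
</util:list>
Which Format is actually used for a given SP is then driven by the Format Selection rules mentioned above (SP metadata or relying-party overrides), not by the resolver.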

What is the practical difference between a sub-workflow and the includes directive? [Snakemake]

In the Snakemake documentation, the include directive can incorporate all of the rules of another workflow into the main workflow, and apparently those rules can show up in snakemake --dag -n | dot -Tsvg > dag.svg. Sub-workflows, on the other hand, can be executed prior to the main workflow should you develop rules which depend on their output.
My question is: how are these two really different? Right now I am working on a workflow, and it seems like I can get by just using include and putting the name of the output in rule all of the main workflow. I could probably even place that output in the input of a main-workflow rule, making the included workflow execute prior to that rule. Additionally, I can't visualize a DAG which includes the sub-workflow, for whatever reason. What do sub-workflows offer that the include directive can't?
The include directive doesn't "incorporate another workflow". It just adds the rules from another file, as if you had added them by copy/paste (with the minor difference that include doesn't affect your target rule). A subworkflow, by contrast, is an isolated set of rules that work together to produce the final target file of that subworkflow, so it is well structured and isolated from both the main workflow and other subworkflows.
Anyway, my personal experience shows that there are some bugs in Snakemake that make using subworkflows quite difficult. Including the file is pretty straightforward and easy.
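For example, including rules from other files is just this (a minimal sketch; the file names and the report target are made up):
# Snakefile (sketch): pull rules in from other files; the paths are hypothetical
include: "rules/qc.smk"
include: "rules/align.smk"

rule all:
    input:
        "results/report.html"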
I've never used subworkflows, but here's a case where it may be more convenient to use them rather than the include directive. (In theory you don't need either include or subworkflow, as you could write everything in one massive Snakefile; the point is more about convenience.)
Imagine you are writing a workflow that depends on result files from a published work (or from a previous project of yours). The authors did not make public the files you need, but they provide a Snakemake workflow to produce them. Their workflow may be quite complex, and the files you need may be just intermediate steps. So instead of making sense of the whole workflow and parsing it into your own include directives, you use subworkflow to generate the required file(s). E.g.:
subworkflow jones_etal:
    workdir:
        "./jones_etal"
    snakefile:
        "./jones_etal/Snakefile"

rule all:
    input:
        'my_results.txt',

rule one:
    input:
        jones_etal('from_jones.txt'),
    output:
        'my_results.txt',
    ...

How do I configure the ModSecurity engine to be ON for a single attack type and DetectionOnly for all others?

I need to implement ModSecurity gradually. It must be configured to block attacks of only a single attack type (e.g. SQLi), but log the attacks of all other attack types.
To keep upgrades of the OWASP rules easy, it is recommended to avoid modifying the original rule files. Ideally I'm looking for a solution which follows this guideline and won't require modifying the original OWASP rules.
Currently my test configuration only accomplishes part of this. With this Debian installation of ModSecurity, I have removed individual rule files (/usr/share/modsecurity-crs/rules/*.conf) from the configuration. This allows me to enable ModSecurity with the engine on and only the rule sets for the particular attack type loaded, but it does not log the incidents of the other attack types.
You have a few options:
1) Use anomaly scoring and the sql_injection_score value that the OWASP CRS sets for SQLi rules.
Set your mode to DetectionOnly.
Set your anomaly scoring threshold values very high in the CRS setup config.
Add a new rule that blocks if sql_injection_score is above a certain amount.
This can be achieved with an extra rule like this, placed after the CRS includes so that it runs once the score has been set:
SecRule TX:SQL_INJECTION_SCORE "@gt 1" \
    "id:9999,\
    phase:2,\
    ctl:ruleEngine=On,\
    block"
Set the "@gt 1" threshold to an appropriate value.
The OWASP CRS sets similar variables for other categories as well.
2) Load rule files individually, and add rules before and after them to turn the rule engine on and off.
Within a phase, rules are executed in the order specified. You can use this to have config like the following:
SecRuleEngine DetectionOnly
Include rules/other_rules_1.conf
Include rules/other_rules_2.conf
SecAction "id:9000,phase:2,pass,nolog,ctl:ruleEngine=On"
Include rules/sqli_rules.conf
SecAction "id:9001,phase:2,pass,nolog,ctl:ruleEngine=DetectionOnly"
Include rules/other_rules_3.conf
Include rules/other_rules_4.conf
However, if a category contains rules in several phases, then you'll need several SecAction pairs - one for each phase used.
3) Activate the rules you want by altering their actions to include turning on the ruleEngine.
Set your mode to DetectionOnly.
Use SecRuleUpdateActionById to add a ctl:ruleEngine=on to the rules you want on. It would be nice if there was a SecRuleUpdateActionByTag or SecRuleAddActionByTag but there isn’t (though it has been asked for in the past).
This is probably a bit fragile, as it depends on knowing the specific rule ids and also requires checking the actions per rule, or assuming they are all the same. It is probably easier to just edit the CRS files, to be honest.
This option is probably the best if you want to enable only a specific set of rules rather than a full category.
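A rough sketch of this approach (the include path and rule id are placeholders; look up the actual SQLi rule ids shipped with your CRS version, and test the result, since depending on the ModSecurity version SecRuleUpdateActionById may replace the rule's existing action list rather than extend it):
SecRuleEngine DetectionOnly
Include /usr/share/modsecurity-crs/rules/*.conf
# 950901 is only an illustrative id; repeat the directive for each SQLi rule you want enforced
SecRuleUpdateActionById 950901 "ctl:ruleEngine=On"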
4) Edit the files, to do the same as above directly.
This is not a bad option if you know it will only be a short-term measure and you eventually hope to enable all rules anyway. Revert the files when ready.
Alternatively, leave the original rules in place and copy the ones you want, giving the copies new ids and adding the ctl:ruleEngine=On action.

Yocto: how to remove/blacklist some dependency from RDEPENDS of a package?

I have a custom machine layer based on https://github.com/jumpnow/meta-wandboard.
I've upgraded the kernel to 4.8.6 and want to add X11 to the image.
I'm modifying the image recipe (console-image.bb).
Since wandboard is based on i.MX6, I want to include the xf86-video-imxfb-vivante package from meta-fsl-arm.
However, it fails complaining about inability to build kernel-module-imx-gpu-viv. I believe that happens because xf86-video-imxfb-vivante DEPENDS on imx-gpu-viv which in turn RDEPENDS on kernel-module-imx-gpu-viv.
I realize that those dependencies have been created with meta-fsl-arm BSP and vanilla Poky distribution. But those things are way outdated for wandboard, hence I am using the custom machine layer with modern kernel.
The kernel is configured to include the Vivante DRM module and I really don't want the kernel-module-imx-gpu-viv package to be built.
Is there a way to exclude it from RDEPENDS? Can I somehow swear my health to the build system that I will take care of this specific run-time dependency myself?
I have tried blacklisting kernel-module-imx-gpu-viv by setting PNBLACKLIST[kernel-module-imx-gpu-viv] in my local.conf, but that's only part of a solution. It helps avoid build failures, but the packaging process still fails.
IIUC your problem comes from these lines in the imx-gpu-viv recipe:
FILES_libgal-mx6 = "${libdir}/libGAL${SOLIBS} ${libdir}/libGAL_egl${SOLIBS}"
FILES_libgal-mx6-dev = "${libdir}/libGAL${SOLIBSDEV} ${includedir}/HAL"
RDEPENDS_libgal-mx6 += "kernel-module-imx-gpu-viv"
INSANE_SKIP_libgal-mx6 += "build-deps"
I would actually qualify this RDEPENDS as a bug: usually kernel module dependencies are specified as RRECOMMENDS, because most modules can be compiled into the kernel, producing no separate package at all while still providing the functionality. But that's another issue.
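For comparison, a recommends-style dependency in the recipe would have looked like this (just a sketch of the alternative, not what the layer actually ships):
RRECOMMENDS_libgal-mx6 += "kernel-module-imx-gpu-viv"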
There are several ways to fix this problem. The first general route is to tweak RDEPENDS for the package. It's just a BitBake variable, so you can either assign it some other value or remove a portion of it. In the first case it would look somewhat like this:
RDEPENDS_libgal-mx6 = ""
In the second one:
RDEPENDS_libgal-mx6_remove = "kernel-module-imx-gpu-viv"
Obviously, these two options have different implications for your present and future work. In general I would opt for the softer one, the second, because it has less potential for breakage when you update the meta-fsl-arm layer, which can change the imx-gpu-viv recipe in any way. But when you're overriding a more complex recipe with big lists in variables and modifying it heavily (not just removing a thing or two), it might be easier to maintain with a full hard assignment of the variables.
Now there is also the question of where to do this variable mangling. The main option is a .bbappend in your layer; that's what appends are made for. But you can also do it from your distro configuration (if you maintain your own distro it might be easier to have all these tweaks in one place rather than spread across numerous appends) or from your local.conf (which is a nice place to quickly try it out, but probably not something to use longer term). I usually use a .bbappend.
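For the .bbappend route, a minimal sketch would be something like this (the layer and directory names are assumptions; match them to where imx-gpu-viv lives in your meta-fsl-arm checkout):
# meta-yourlayer/recipes-graphics/imx-gpu-viv/imx-gpu-viv_%.bbappend
RDEPENDS_libgal-mx6_remove = "kernel-module-imx-gpu-viv"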
But there is also a completely different approach to this problem: rather than fixing the package's dependencies, you can fix what some other package provides. If, for example, you have the kernel configured with the imx-gpu-viv module built right into the main zImage, you can do
RPROVIDES_kernel-image += "kernel-module-imx-gpu-viv"
(also in .bbappend, distro configuration or local.conf) and that's it.
In any case, your approach to fixing this problem should reflect the difference between your setup and the recipe's assumptions. If you do have the module, but in a different package, then go for RPROVIDES; if you have some other module providing the same functionality to the libgal-mx6 package, then fix the libgal-mx6 dependencies (and it's better to really fix them, meaning not only drop what you don't need but also add what is relevant for your setup).

Schemas of REST APIs

I am new to using REST APIs, but I have seen that many systems which expose REST APIs do not provide the schema (structure) of the message that will be returned. Because of this, we cannot be sure how exactly to parse the response. For example, an API call today might return a particular set of fields, and some day later the provider might add another field to the response, which might mess up my parser. So, if I were to parse the reply message of a particular call, how can I do it the right way? And how do the systems which expose these APIs process the calls internally?
That is why versioning was invented: if something major changes, a responsible provider will publish it under a different version number.
This can happen with almost any library you use, not just REST APIs.
The usual convention is to have your version numbering in the format major.minor.patch (see http://semver.org/).
Anything that breaks backward compatibility should come with a new major version number. Of course, not all maintainers follow this, or follow it perfectly; if the provider(s) whose API you consume are poor at this, you cannot do much.
What you can do is manage your dependency versioning well for all the libraries you use, including your APIs: know which version you wish to support and make explicit version-based calls:
instead of
http://example.com/api/customers/1234
use
http://example.com/api/v3.0/customers/1234
and keep track of the dependencies for your project. Also keep track of the provider's documentation for updates, to see if a change will affect you in any way.
Regarding APIs being in schemaless formats: while new fields may not create problems, removing existing ones, or changing presentation, logic, etc., will break your code in ways that are difficult to debug, and the changes may be subtle.
Once a provider I was using changed a field to all lowercase and didn't inform me. My application was expecting unique values for that field but was getting colliding values in some rare cases, and it took me days to figure out where and what had happened.
Many REST services use JSON as the base format for encoding data. Clients should simply ignore properties they do not know in the JSON response, thereby allowing the server to evolve its output over time (provided that the server doesn't rename or delete properties). REST service versioning is a whole discussion on its own (try searching for it).
So in short: new fields should not mess up your parser; it should simply ignore them.
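As a minimal sketch of that tolerant-reader style (the payload and field names are made up for illustration, borrowed from the interface example below):
import json

payload = '{"name": "eth0", "ip": "10.1.1.1", "mtu": 1500}'  # pretend this came from the API
data = json.loads(payload)

# Read only the fields you actually rely on; unknown keys such as "mtu" are simply ignored.
name = data["name"]
ip = data.get("ip")  # .get() also tolerates the field being absent entirely
print(name, ip)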
Well, most good ones do! See, for example, the OpenStack REST APIs. Also, please note that the response being returned is for that particular version of the API. I.e., if you use the API (say):
/api/v1.0/interfaces/interface
you might get some fields like:
{ "name" : "eth0", "ip" : "10.1.1.1" }
Now the API might evolve, and there could be a 1.1 version which adds a few fields (retaining backward compatibility). I.e., /api/v1.1/interfaces/interface might return:
{ "name" : "eth0", "ip" : "10.1.1.1", "mtu" : 1500 }
or a 2.0 which has a completely different structure, breaking backward compatibility:
GET /api/v2.0/interfaces/interface
...
{ "interface" : { "name : "eth0", "ip" : "10.1.1.1" , "mtu" : 1500 }}
Again, all of these have to be well documented!
And the code using the APIs must be aware of the version it is interested in and parse the response accordingly.
The format of the response, BTW, depends on the kind of request you make: for example, if you set the Accept header to JSON (Accept: application/json), the returned response must be in JSON, so you know which parser to use.
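Putting version pinning and content negotiation together on the client side (a hedged sketch; the host, path, and version come from the examples above):
import json
import urllib.request

# Pin the API version in the URL and explicitly ask for JSON via the Accept header.
req = urllib.request.Request(
    "http://example.com/api/v1.0/interfaces/interface",
    headers={"Accept": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    interface = json.load(resp)
print(interface.get("name"), interface.get("ip"))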
HTH