Keycloak with mod_security

I plan to use Keycloak as our primary login app, but before I move forward with deployment I need to address one concern. When I enabled mod_security on the Apache server, several Keycloak screens and operations became blocked, including the ability to update the theme. If I disable mod_security, everything works fine.
Am I doing anything wrong, or am I missing some kind of mod_security setting for Keycloak? Kindly suggest a solution.
I tried disabling a few rules, but there are too many, and to disable a rule I need to provide a proper reason for doing so.

OWASP ModSecurity Core Rule Set Dev on Duty here. Are you using the Core Rule Set (CRS)? Are those the rules you are having trouble with, or are you using some other rule set? Please confirm.
Assuming you are using CRS, have you tuned your WAF installation for your web application (Keycloak)? Tuning is a required step before CRS can be used correctly in front of a web application. This is especially true at a higher paranoia level, i.e. paranoia level 2 and above.
There are some great guides and documentation available online which cover the tuning process. The CRS false positives and tuning documentation is very good. There is also a popular series of tutorials on netnea.com which cover every step from the very beginning: compiling the ModSecurity WAF engine, installing CRS, tuning by writing rule exclusions, and more.
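As a sketch of what a tuning step looks like, a typical approach is a scoped rule exclusion in the Apache/ModSecurity configuration rather than disabling the WAF entirely. The path and rule ID below are illustrative, not a verified Keycloak fix; identify the actual offending rules from the ModSecurity audit/error log first.

```apache
# Illustrative rule exclusion: skip one CRS rule only for the
# Keycloak admin console path, instead of turning mod_security off.
# Rule ID and path are examples; take the real values from your
# audit log entries.
<LocationMatch "^/auth/admin">
    SecRuleRemoveById 942100
</LocationMatch>
```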

Related

What are Rulesets and RulesFile in Firestore securityRules? Why do we have multiple Rulesets deployed?

I am trying to simplify our Firestore security rules and their deployment in different environments (prod / dev / test). I came across the recently added https://firebase.google.com/docs/reference/admin/node/admin.securityRules.html, which looked like it would be very helpful, as I can programmatically create and deploy rules. However, the concept of a Ruleset and a RulesFile is very confusing.
In particular, what does it mean to have multiple Rulesets deployed? I never knew this, but when I run admin.securityRules().listRulesetMetadata(), I see a list of over 100 such rulesets. Is it to support rollbacks? If so, how could one do that? Is the latest deployed Ruleset always the one that is enforced, overriding everything from the past? What is a RulesFile then? Is there a case where there isn't a one-to-one mapping between RulesFile and Ruleset?
Some clarifications on these would be very helpful in determining if this is the correct solution for me.
Each time you change your security rules, you're indeed creating a new Ruleset. The latest deployed Ruleset is active.
So when you do a rollback to an older set of rules, Firebase reads that Ruleset and redeploys it.
Currently there is only one RulesFile per Ruleset, at least as far as the main API surfaces (CLI, Console) go. This may (and very likely will) change in the future, to allow more powerful, common conventions (think include files and standard libraries) to also be applied to security rules.
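As a sketch of what a rollback could look like with the Node Admin SDK: the selection helper below is illustrative (it assumes the metadata list is ordered newest-first), and the commented-out calls use the `securityRules()` API the question references.

```javascript
// Given ruleset metadata assumed ordered newest-first, pick the
// entry to roll back to (the second-newest one).
function pickPreviousRuleset(rulesetMetadataList) {
  if (rulesetMetadataList.length < 2) {
    throw new Error('No older ruleset to roll back to');
  }
  return rulesetMetadataList[1]; // second-newest entry
}

// Against the real API (requires an initialized admin app):
// const admin = require('firebase-admin');
// const rules = admin.securityRules();
// const { rulesets } = await rules.listRulesetMetadata();
// const target = pickPreviousRuleset(rulesets);
// const ruleset = await rules.getRuleset(target.name);
// // Re-releasing the old source creates a new Ruleset that becomes active:
// await rules.releaseFirestoreRulesetFromSource(ruleset.source[0].content);
```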

CQ5 Dispatcher: is it a must or optional?

We are getting a lot of problems with the Dispatcher. As per the CQ5 documentation, the Dispatcher is a caching and/or load-balancing tool, so as per my analysis we could also go without it. Am I correct? I want to integrate Squid or Varnish as a web cache with my Apache server and shut down the Dispatcher. Would that be a good option?
Any views/help is appreciated.
Yes, it's perfectly possible to run a website without the Dispatcher in front. Your options would then seem to come down to:
No caching
Implementing a cache in front of the Publish instance (e.g. Squid/Varnish, as you mentioned; configuration required)
Integrating a caching solution in Java that you can apply to parts of your templates/components individually (development required)
Also, you'd need to check with Adobe what level of support they'd give you for any of the above solutions before undertaking them. If you like, you could post specific questions to SO around the problems you're facing with the Dispatcher and you may get some resolutions too.
I was told that you should use Dispatcher servers for your Publish instance, because it really helps the loading times. There was also documentation with a table showing how much it affects performance depending on the number of documents served.
To avoid caching problems, you can specify files, folders or file types which should never be cached. You can also specify caching behaviour in the source code of the pages. In addition, making changes to content on your Author instance triggers a flush on the Dispatcher for the affected content, to make sure that no cached old version is being served.
Last but not least using an apache server also allows you to handle virtual hosts and rewrite rules easily.
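The cache-exclusion rules mentioned above live in the Dispatcher's configuration file. A fragment like the following sketches the idea; the glob patterns are illustrative, not taken from a real project.

```apache
# Illustrative fragment of a dispatcher.any /cache section:
# cache everything by default, but never cache a dynamic path.
/cache {
  /rules {
    /0000 { /glob "*" /type "allow" }
    /0001 { /glob "/content/*/dynamic/*" /type "deny" }
  }
}
```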
It's a must.
If you are having problems with the Dispatcher, it could be a sign that you are using the wrong platform for your development needs, since you are resorting to technologies that are not needed for AEM.

Execution plan information from drools

We are planning to use BRMS 5.3.1 in our projects, and a use case popped up yesterday where the business wanted to store which rules evaluated to TRUE and were eventually fired, so that this information can be used for analysis at a later point. Does Drools provide an API that could supply this information at runtime? If it does, what would be the performance impact of having such a feature enabled on production systems?
Appreciate your answers on this.
Yes, you can add one of the AgendaEventListeners to the session to find out which rules were activated and fired. The performance impact will depend on what you do inside that listener, but if you store the information it provides asynchronously (sending a JMS message, for example), everything will be fine.
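As a sketch against the Drools 5.x knowledge API (the listener body is illustrative; the async hand-off is only indicated in comments):

```java
import org.drools.event.rule.AfterActivationFiredEvent;
import org.drools.event.rule.DefaultAgendaEventListener;

// Illustrative listener for Drools 5.x: captures the name of each
// rule that fires. In production, hand the name off asynchronously
// (e.g. to a JMS queue) instead of doing heavy work inline.
public class FiredRuleAudit extends DefaultAgendaEventListener {
    @Override
    public void afterActivationFired(AfterActivationFiredEvent event) {
        String ruleName = event.getActivation().getRule().getName();
        // enqueue ruleName for later analysis (async, e.g. JMS)
    }
}

// Registration on a StatefulKnowledgeSession:
// ksession.addEventListener(new FiredRuleAudit());
```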
HTH

Read-access to SAP's DB directly?

We're an SME with SAP implemented. We're trying to use the transactional data in SAP to build another system in PHP for our trucking division, for graphical reports, etc. This is because we don't have in-house ABAP development expertise, and any SAP modifications are expensive.
Presently, I've managed to achieve our objectives with read-only access to our Quality DB2 server and any writes go to another DB2 server. We've found the CPU usage on the SELECT statements to be acceptable and the user is granted access only to specific tables/views.
SAP's Quality DB2 -> PHP -> Different DB2 client
Would like your opinion on whether it is safe to read from production the same way? Implementing all of this again via the RFC connector seems very painful. Master-Slave config is an option for us but again will involve external consultancy.
EDIT
Forgot to mention that our SAP team doesn't want to build even reports for another six months; they want to leave the system intact. That is why we're building this in PHP on top.
If you don't have ABAP expertise, get it; it's not that hard, and you'll get a lot of stuff "for granted" (as in "provided by the platform") that you would otherwise have to implement manually, like user authentication, authority management and software logistics (moving stuff from the development to the production repository). See these articles for a short (although biased) introduction. If you still need an external PHP application, fine, but you really should give ABAP a try first. For web applications, you might want to look into Web Dynpro ABAP. Using the IGS built-in chart engine with the BusinessGraphics element, you'll get a ton of chart types for free. You can also integrate PDF forms created with Adobe LiveCycle Designer.
Second, while "any SAP modifications are expensive" might be a reasonable rule of thumb, what you're suggesting isn't a modification. It's add-on development, and it's neither expensive nor more complex than any other programming language and/or environment out there. If you can't or don't want to implement your application entirely using the existing infrastructure, at least use a decent interface: web services, RFC, whatever. From an ABAP point of view, RFC is always the easiest option, but you can use SOAP or REST as well, although you'll have to implement the latter manually. It's not that hard either.
NEVER EVER access the SAP database directly. Just don't. You'll have to implement all the constraints like client dependency or checks for validity dates and cancellation flags for yourself - that's hardly less complex than writing a decent interface, and it's prone to break every time the structure is changed. And if at some point you need to read some of the more complex contents like long texts, you're screwed - period. Not to mention that most internal or external auditors (if that happens to be an issue with your company and/or legal requirements) don't like direct database access to a system as critical as this one, which again can cause lots of trouble from people you really don't want to mess with. It's just not worth it.

Will major config changes discourage users from deploying code?

I'm beginning development on a solution that will plug into an existing application. It will be made available for public use.
I have the option of using a newer technology that promotes better architecture, flexibility, speed, etc... or sticking with existing technology that is tried and tested which the application already uses.
The downside of going with the newer technology is that a major change to an essential config file needs to be made to support it. If the change goes wrong the app would be out of service. Uninstall is also an issue as future custom code by other developers may require the newer tech and there's no way this can be determined.
How important is this issue in considering an approach?
Will significant config changes put users off deploying code, or cause problems for them later?
Edit:
Intentionally not going into specifics about the technologies here, to avoid the question being derailed.
Install/uninstall software can be provided but there is some complexity involved which may cause them to foul up on edge cases resulting in a dead app. (A backup of the original config would be a way to mitigate that.) Also see the issue about uninstall above where I essentially can't provide one.
Yes, in my experience, any large amount of work will make users think twice about deploying or upgrading.
It's your standard cost/benefit analysis done by businesses with just about every decision. Will the expected benefits more than outweigh the potential costs?
When we release updates to our software, there's almost always a major component that's there just to assist the users to migrate.
An example (modified enough to protect the guilty): we have a product which generates reports on system performance and other things. But the reports aren't that pretty and the software for viewing them is tied to a specific platform.
We've leveraged BIRT to give us intranet-based reporting that looks much nicer and only needs the client to have a web browser (not some fat client).
Very few customers made the switch until we provided a toolset that would take their standard reports and turn them into BIRT reports. Once we supplied that, customers started taking it seriously - the benefit hadn't changed, but the cost had gone right down.
You've given us no detail, so we can't answer with any specificity. But if your question is "will a significant portion of my potential userbase be deterred from using my product if they have to do significant setup work", then the answer is yes. I've seen this time and time again, with my own products and those that I've installed myself, even when the only "config change" is an uninstall and reinstall. People don't like to do work.
You may want to devote more effort than you've considered so far to making the upgrade painless. Even if you're upgrading someone else's framework, you may find the effort worthwhile and reflected in an increased number of installs.
I have noticed that "power users" - developers, sysadmins, etc. - are willing to put up with more setup work.
I'm not sure what you mean by "major config change", but if you're talking about settings / configuration files, then I've been doing something like this:
An application always contains a default configuration which is useful for most users, and which can't be replaced. Instead, users can override one or more of the default settings in their own, separate configuration file. When a new (major) version is released, most users don't need to reconfigure anything: their own custom configurations are still taken from their own configuration file, and possibly required new parameters are taken from the new release's default settings.
It's obvious that most users don't want to waste their time adjusting settings that were already right, and quite rightfully so.
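The override pattern described above can be sketched in a few lines; the setting names here are illustrative, not from any particular application.

```javascript
// Shipped defaults: useful for most users and never replaced.
const DEFAULTS = {
  theme: 'light',
  timeoutSeconds: 30,
  logLevel: 'info',
};

// Merge user overrides on top of the defaults. Keys the user never
// set fall through to the (possibly new) defaults, so most users
// need to reconfigure nothing after an upgrade.
function loadConfig(userSettings) {
  return { ...DEFAULTS, ...userSettings };
}

// e.g. a user who only overrode the theme:
const cfg = loadConfig({ theme: 'dark' });
```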