XSS, CSRF, clickjacking, and rate-limit vulnerability fixes in DSpace version 6.0

Please advise how to fix the following vulnerabilities in DSpace version 6.0:
Stored XSS - Cross-site scripting (also known as XSS) is a web security vulnerability that occurs when a malicious script is injected directly into a vulnerable web application because of missing input validation.
Reflected XSS - Reflected XSS is one part of the cross-site scripting family of attacks, also termed "non-persistent XSS" or "Type I".
Rate limiting - the number of wrong login attempts should be limited to 3, followed by a 15-minute wait.
CSRF - cross-site request forgery.
Clickjacking.
Thanks
Rajiv
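For the clickjacking and reflected-XSS items, one common mitigation is to send defensive response headers on every page. Below is a minimal, hedged sketch of a generic servlet filter; it is not a DSpace API, but since DSpace 6 is a Java servlet application, a filter like this could be registered in the webapp's web.xml. The class name and header values are illustrative.

// Illustrative servlet filter (hypothetical class, not part of DSpace itself)
// that adds anti-clickjacking and anti-XSS response headers.
import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletResponse;

public class SecurityHeadersFilter implements Filter {

    @Override
    public void init(FilterConfig filterConfig) { }

    @Override
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        HttpServletResponse response = (HttpServletResponse) res;
        // Clickjacking: forbid framing by other origins.
        response.setHeader("X-Frame-Options", "SAMEORIGIN");
        // XSS: a restrictive Content-Security-Policy blocks injected inline scripts.
        response.setHeader("Content-Security-Policy", "default-src 'self'");
        chain.doFilter(req, res);
    }

    @Override
    public void destroy() { }
}

Note that a CSP this strict will break inline scripts, so it has to be tuned to the theme in use.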


Invoking a script as part of a web API method: how bad an idea is it?

I have a PowerShell script (though I think these considerations could extend to any script that requires a runtime to interpret and execute it) whose functionality I also need to expose to a web application front end as a REST API. I've been asked to call the script itself directly from the web method. Although technically feasible, having a web API method that starts a shell/process to execute the script and redirects stdin/stdout/stderr looks like very bad practice to me. Is there any specific security risk in doing something like this?
Reading this question brings to mind how many of the OWASP Top Ten Security Vulnerabilities it would expose your site to.
Injection Flaws - This is definitely a high risk. There are ways to remediate it, of course. Parameterizing all input with strongly-typed dates and numbers instead of strings is one method that can be used, but it may not fit with your business case. You should never allow user-provided code to be executed, but if you are accepting strings as input and running a script against that input, it becomes very difficult to prevent arbitrary code execution.
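To make the parameterization idea concrete, here is a hedged sketch (in Java for illustration; the same idea applies in C#, and the script path, parameter name, and whitelist are all hypothetical). The web method accepts only a whitelisted value and passes it to the script as a discrete argument, so no user-controlled string is ever parsed by a shell:

// Illustrative safe invocation: validate against a whitelist, then pass each
// argument as a separate list element so no shell parsing takes place.
import java.io.IOException;
import java.util.List;
import java.util.Set;

public class ReportRunner {

    private static final Set<String> ALLOWED_REPORTS = Set.of("daily", "weekly", "monthly");

    public Process run(String reportName) throws IOException {
        if (!ALLOWED_REPORTS.contains(reportName)) {
            throw new IllegalArgumentException("Unknown report: " + reportName);
        }
        return new ProcessBuilder(List.of(
                "powershell.exe", "-NoProfile", "-File", "C:\\scripts\\report.ps1",
                "-ReportName", reportName))
            .redirectErrorStream(true)
            .start();
    }
}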
Broken Authentication - possibly vulnerable. If you force a user to authenticate before reaching your script (you probably should), there is a chance that the user reuses their credentials elsewhere and exposes those credentials to a brute force attack. Do you lock out accounts after too many tries? Do you have two-factor authentication? Do you allow weak passwords? These are all considerations when you introduce a new authentication mechanism.
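As a concrete illustration of the lockout question, here is a minimal in-memory sketch (class and method names are hypothetical, and a real implementation would persist this state so a restart doesn't clear it):

// Illustrative lockout tracker: lock an account for 15 minutes after 3
// consecutive failed login attempts.
import java.time.Duration;
import java.time.Instant;
import java.util.concurrent.ConcurrentHashMap;

public class LoginLockout {

    private static final int MAX_ATTEMPTS = 3;
    private static final Duration LOCK_TIME = Duration.ofMinutes(15);

    private static final class State {
        final int failures;
        final Instant lockedUntil; // null while not locked
        State(int failures, Instant lockedUntil) {
            this.failures = failures;
            this.lockedUntil = lockedUntil;
        }
    }

    private final ConcurrentHashMap<String, State> states = new ConcurrentHashMap<>();

    /** True if the account is currently locked out. */
    public boolean isLocked(String user) {
        State s = states.get(user);
        return s != null && s.lockedUntil != null && Instant.now().isBefore(s.lockedUntil);
    }

    /** Record a failed login; the third consecutive failure locks the account. */
    public void recordFailure(String user) {
        states.merge(user, new State(1, null), (old, unused) -> {
            int failures = old.failures + 1;
            Instant until = (failures >= MAX_ATTEMPTS) ? Instant.now().plus(LOCK_TIME) : null;
            return new State(failures, until);
        });
    }

    /** Record a successful login, clearing the failure counter. */
    public void recordSuccess(String user) {
        states.remove(user);
    }
}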
Sensitive data exposure - likely vulnerable, depending on your script. Does the script allow reading files and returning their contents? If not now, will it do so in the future? Even if it's never designed to do so, combined with other exploits the script might be able to read a file from a path that's outside the web directory. It's very difficult to prevent directory traversal exploits that would allow a malicious user access to your server, or even the entire network. Compiled code and the web server prevent this in many cases.
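A hedged sketch of the standard guard against directory traversal (the base directory and class name are illustrative): resolve the requested name against a fixed base path and reject anything that escapes it.

// Illustrative directory traversal guard for file-returning endpoints.
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class SafeFileReader {

    // The only directory the endpoint is ever allowed to read from.
    private static final Path BASE = Path.of("/var/app/exports").toAbsolutePath().normalize();

    public byte[] read(String requestedName) throws IOException {
        // normalize() collapses any "../" segments before the containment check.
        Path resolved = BASE.resolve(requestedName).normalize();
        if (!resolved.startsWith(BASE)) {
            throw new SecurityException("Path escapes the allowed directory: " + requestedName);
        }
        return Files.readAllBytes(resolved);
    }
}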
XML External Entities - possibly vulnerable, depending on your requirements. If you allow user-provided XML, the bad guy can inject other files and create havoc. This is easier to trap when you're using standard web tools.
Broken Access Control - definitely vulnerable. A Web API application can enforce user controls and set permission levels in a C# controller; exceptions are handled with HTTP status codes that indicate the request was not allowed. In contrast, PowerShell executes within the security context of the logged-in user and allows system-level changes even when not running elevated. If an injection flaw is exploited, the code would be executed in the web server's security context, not the user's. You may be surprised how much the IIS_USER (or other Application Pool service account) can do. For one, if the bad guy is executing in the context of a service account, they might be able to bring down your whole site with a single request by locking out that account or changing the password - a task that's much easier with a PowerShell script than with compiled C# code.
Security Misconfiguration - likely vulnerable. A running script would require its own security configuration outside whatever framework you are using for the Web API. Are you ready to re-implement something like OAuth claims or ACLs?
Cross-Site Scripting - likely vulnerable. Are you echoing the script output? If you're not sanitizing input and output, the script could echo some JavaScript that sends a user's cookie contents to a malicious server, giving the attacker access to all of that user's resources. Cross-site request forgery is also a risk if input is not validated.
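To illustrate output sanitization, here is a minimal escaping sketch, hand-rolled to stay self-contained (in practice a vetted library such as the OWASP Java Encoder is preferable):

// Illustrative HTML escaper: script output passed through this renders as
// text instead of executing, e.g. "<script>" becomes "&lt;script&gt;".
public final class HtmlEscape {

    private HtmlEscape() { }

    public static String escape(String s) {
        StringBuilder out = new StringBuilder(s.length());
        for (int i = 0; i < s.length(); i++) {
            char c = s.charAt(i);
            switch (c) {
                case '<':  out.append("&lt;");   break;
                case '>':  out.append("&gt;");   break;
                case '&':  out.append("&amp;");  break;
                case '"':  out.append("&quot;"); break;
                case '\'': out.append("&#x27;"); break;
                default:   out.append(c);
            }
        }
        return out.toString();
    }
}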
Insecure Deserialization - Probably not vulnerable.
Using Components with Known Vulnerabilities - greatly increased vulnerability compared to compiled code. PowerShell grants access to a whole set of libraries that would otherwise need explicit references in a compiled application.
Insufficient Logging & Monitoring - likely vulnerable. IIS logs requests by default, but PowerShell doesn't log anything unless you explicitly write to a file or start a transcript. Neither method is designed for concurrency, and both may introduce performance or functional problems around shared files.
In short, 9 of the top 10 vulnerabilities may affect this implementation. I would hope that is enough to stop you from making your script public, at the very least. Fundamentally, the problem is that you're using the tool (PowerShell) for a purpose it wasn't intended to fulfill.

OroCRM slows down my hosting server?

I am using Magento 1.9 with OroCRM configured. Most of the time my site is down or keeps loading for an hour because OroCRM consumes server resources. Here is a sample line from the Apache server-status output:
10-0 10440 0/1/79166 W 0.00 7 0 147924407 0.0 0.04 2121.58 66.248.202.18 http/1.1 abc.in:443 POST /index.php/api/v2_soap/index/ HTTP/1.1
FYI, here W means "Sending Reply".
How can I solve this issue?
There are no general recommendations for performance optimization, as it depends on many factors: the amount of data in the database, code customizations, the number of visitors, which features you use the most, and so on.
First, make sure the web server meets the OroCRM system requirements. These requirements apply only to the OroCRM application, so if you have multiple applications configured on the same server, you should increase resources accordingly.
If the requirements are met, the next step is increasing the available resources. Most of the time this is the easiest way to solve a performance issue.
When that doesn't help, you can turn to performance profiling: check the SQL server's slow query log, use blackfire.io to profile the specific requests that are slow, and so on.

How appropriate is it to use SAML_login with AEM with more than 1M users?

I am investigating slow login times and some profile synchronisation problems on a large enterprise AEM project. The system has around 1.5M users, and the website is served by 10 publishers.
The way this project is built, SAML_login has been enabled for all these end users, and there is a third-party IdP which I assume SAML_login talks to. I'm no expert on this SSO/SAML_login process, so as a first step I'm trying to understand whether this is the correct way to go.
Because of this setup and the number of users, a SAML_login call takes 15 seconds on average. This is becoming more unacceptable day by day as the user count rises. Even more importantly, the synchronization between the 10 publishers occasionally fails, so some users sometimes can't use the system as expected.
Because the users are stored in the JCR for SAML_login, you cannot even inspect the /home/users folder from the CRX browser; it times out, as it is impossible to show 1.5M rows at once. My educated guess is that this is also why the SAML_login call takes so long.
I've come across articles that describe how to set up SAML_login on AEM, which makes it sound legitimate for the way it is used in this case. But in my opinion this is a very poor setup, as the JCR is not designed as a quick-access data store for this kind of usage scenario.
My understanding so far is that this approach might work well with a limited number of users, but with this many users it is not an applicable solution. So my first question is: am I right? :)
If I'm not right, there is certainly a bottleneck somewhere that I'm not yet aware of. What could that bottleneck be, and how could it be improved?
The AEM SAML authentication handler has some performance limitations with the default configuration. When your browser does an HTTP POST request to AEM under /saml_login, it includes a Base64-encoded "SAMLResponse" request parameter. AEM processes that response directly and does not contact any external systems.
Even though the SAML response is processed on AEM itself, the bottlenecks of the /saml_login call are the following:
Initial login, where AEM creates the user node for the first time. You can mitigate this by creating the nodes ahead of time, e.g. with a script that pre-creates the SAML user nodes under /home/users (see the sketch after this list).
During each login, when the session is first created, a token node is created under the user node at /home/users/.../{usernode}/.tokens. This can be avoided by enabling the encapsulated token feature.
Finally, the last bottleneck occurs when AEM saves the SAMLResponse XML under the user node (for later use by SAML-based logout). This can be avoided by not implementing SAML-based logout. The latest com.adobe.granite.auth.saml bundle supports turning off the saving of the SAML response; the AEM 6.4.8 and AEM 6.5.4 service packs include this feature. To enable it, set the OSGi configuration properties storeSAMLResponse=false and handleLogout=false, and the SAML response will not be stored.
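A hedged sketch of the pre-creation idea using the Jackrabbit UserManager API that AEM exposes (the class name and the source of userIds are hypothetical; the IDs would typically come from an export of the IdP's user store):

// Illustrative pre-creation of SAML user nodes, assumed to run inside AEM
// with an administrative JCR session.
import javax.jcr.Session;
import org.apache.jackrabbit.api.JackrabbitSession;
import org.apache.jackrabbit.api.security.user.UserManager;

public class SamlUserPreCreator {

    public void preCreate(Session session, Iterable<String> userIds) throws Exception {
        UserManager userManager = ((JackrabbitSession) session).getUserManager();
        long count = 0;
        for (String userId : userIds) {
            if (userManager.getAuthorizable(userId) == null) {
                // Password is irrelevant here: SAML users authenticate against the IdP.
                userManager.createUser(userId, null);
            }
            // Save in batches so 1.5M users don't pile up in a single transaction.
            if (++count % 1000 == 0) {
                session.save();
            }
        }
        session.save();
    }
}

Pre-creating the nodes moves the write cost out of the login path, which is exactly the first bottleneck described above.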

How to balance a REST API and openness to prevent data stealing

One of our web sites is a common "advertise your apartment for free" site. Revenue is directly tied to the amount of public usage and the number of listings registered (our marketing department's argument).
On the other side, REST pushes you to maintain a clean API when designing it (our software department's argument), which is an invitation to any competitor to steal our data. In this view, the web server becomes almost an intelligent database.
We have clearly identified our problem, but have no idea how to resolve these constraints. Any tips would help.
Throttle the calls to the data-rich endpoints by IP, to say 1,000 per day (or triple what a normal user would use).
If you expose data, then it can be stolen. Also think about search endpoints that return large datasets, even if they are driven by JavaScript or forms - I have personally written trawlers that circumvent these protections.
You may also think (if the data is that important) about decrypting it in the client based on keys and authentication sent from the server (but this only raises the bar; it does not remove the ability to steal).
Add CAPTCHA/reCAPTCHA for users who are scanning too quickly or too much.
In short:
As always, only expose the minimum API needed to do the job (attack surface minimisation).
Log and throttle (a minimal throttling sketch follows this list).
Force sign-in(?). This at least MAY put off some scanners.
Use a CAPTCHA mechanism for users you think may be bots trawling your data.
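A minimal sketch of the log-and-throttle point as a servlet filter, assuming a single application server (a cluster would need shared counters, e.g. in Redis). The daily limit and class name are illustrative:

// Illustrative per-IP daily throttle (Servlet API 4.0+, where Filter has
// default init/destroy methods).
import java.io.IOException;
import java.time.LocalDate;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletResponse;

public class DailyIpThrottleFilter implements Filter {

    private static final int DAILY_LIMIT = 1000;

    private final ConcurrentHashMap<String, AtomicInteger> counters = new ConcurrentHashMap<>();
    private volatile LocalDate window = LocalDate.now();

    @Override
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        // Reset the counters when the day rolls over.
        LocalDate today = LocalDate.now();
        if (!today.equals(window)) {
            synchronized (this) {
                if (!today.equals(window)) {
                    counters.clear();
                    window = today;
                }
            }
        }
        String ip = req.getRemoteAddr(); // behind a proxy, derive this from X-Forwarded-For
        int calls = counters.computeIfAbsent(ip, k -> new AtomicInteger()).incrementAndGet();
        if (calls > DAILY_LIMIT) {
            ((HttpServletResponse) res).sendError(429, "Daily request limit reached");
            return;
        }
        chain.doFilter(req, res);
    }
}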

FileMaker Web Publishing & Data Integrity

I was recently asked to try to solve a data integrity problem with a FileMaker database app that has been published to the web.
This app collects job applicant data through a series of views. A handful of users have reported that while moving through the application they would see another applicant's data. It seems these users all exceeded the session timeout threshold and were then shown somebody else's data in the form.
I am looking at the JSESSIONID cookie that is being generated, since that is the only link I see between a browser session and the app. The JSESSIONID cookie is set to expire in the past and is of type "session".
The JSESSIONID values also seem incredibly similar; here are two JSESSIONIDs that I received when testing the app:
02442D0AA37DEF0512674E8C
02442D09A38288D712674E8E
Has anyone experienced a similar issue with FileMaker apps published to the web?
Is there anywhere else I need to look besides the way the JSESSIONID and FileMaker 11 relate? In other words, are there other known security vulnerabilities in the FileMaker Web Publishing Engine that anyone is aware of?
With appreciation,
Slinky66
The JSESSIONID is set by Apache Tomcat. This software is bundled with FileMaker's Web Publishing Engine, but the session ID generation is not connected in any way with FileMaker.
I received notice from a FileMaker technical support member that there is a known, documented bug in FileMaker that is the cause of this issue. See these threads for more detailed information:
http://forums.filemaker.com/posts/0d29aeaea1
http://forums.filemaker.com/posts/ad61a7e781