Is there any way to enable a content security policy without 'unsafe-inline' styles for react-beautiful-dnd?

I have the content security policy blues when trying to use React-beautiful-dnd.
I keep on getting:
Refused to apply inline style because it violates the following Content Security Policy directive: "style-src 'self', etc...
I have a MERN stack application that uses Helmet for the CSP and relies heavily on react-beautiful-dnd / @hello-pangea/dnd (essentially the same library, but compatible with React 18).
I have read in a separate Stack Overflow post that React apps cannot support nonces unless they serve static pages, and I am pretty confident that converting this application to serve static pages would break it. It is a big app.
I have tried adding hashes to the CSP for every inline style the library uses, but the header ends up longer than the maximum header size the server permits, and the server starts throwing 502 errors...
So I figure I have a few options left:
Magically find some way to use a nonce in a dynamic React app
Spend months recreating my own version of the React-beautiful-dnd library which doesn't rely on inline styles (and I guess put it up as a public package)
Find and use some other library for managing drag and drop lists
Can anyone more knowledgeable than me out there give a recommendation?
Thanks

You have pretty much listed the viable alternatives. The only one missing is allowing 'unsafe-inline' for styles, which isn't that bad if you lock down the rest of your CSP properly; see https://scotthelme.co.uk/can-you-get-pwned-with-css/
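For a MERN app, that trade-off is straightforward to express with Helmet. Below is a minimal sketch, assuming an Express app; the directive values are illustrative and need adapting to whatever else your app loads:

import express from "express";
import helmet from "helmet";

const app = express();

// Allow inline styles only; keep scripts strict, since script-src is where
// the real XSS exposure lives. Directive values here are illustrative.
app.use(
  helmet.contentSecurityPolicy({
    directives: {
      defaultSrc: ["'self'"],
      scriptSrc: ["'self'"],                   // no 'unsafe-inline' for scripts
      styleSrc: ["'self'", "'unsafe-inline'"], // what react-beautiful-dnd needs
      objectSrc: ["'none'"],
      baseUri: ["'none'"],
    },
  })
);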

Related

Fastly - error page detect and serve from specific server

We have two servers running behind Fastly as the CDN, filtering which types of content get served by which build. In the Fastly code base, we have a .tl file and a .vcl file that contain all the logic defining which routes point to which server.
The code bases for both servers are developed in Laravel.
Problem:
Is there a way to build a rule or logic of some kind so that Fastly routes error statuses to a specific server? If so, what would it look like?
By the way: my knowledge of Fastly is very limited, but I have advanced knowledge of JavaScript, PHP, and regex.
I've put together a working example, using Fastly's Fiddle tool, based on my understanding of what you were trying to achieve:
https://fiddle.fastly.dev/fiddle/dd727e98
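Since you mention being comfortable in JavaScript: the routing idea, stripped of VCL syntax, is just "if the primary backend answers with an error status, re-issue the request against the server designated for errors". Here is that idea as a conceptual TypeScript sketch; the real logic has to live in VCL (as in the fiddle above), and the backend URLs are placeholders:

// Conceptual only: in Fastly this is a VCL restart that switches backends.
const PRIMARY = "https://primary.example.com";
const ERROR_SERVER = "https://errors.example.com";

async function fetchWithErrorFailover(path: string): Promise<Response> {
  const res = await fetch(PRIMARY + path);
  if (res.status >= 500) {
    // Primary failed: serve the response from the dedicated error server.
    return fetch(ERROR_SERVER + path);
  }
  return res;
}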
Here are some other resources available that might assist you:
VCL Examples (here's one that redirects URLs at the edge).
VCL Reference
If you have any further questions, then I'd recommend reaching out to support@fastly.com, who will be happy to help.
All the best.

Need help in identifying the difference between ESAPI.validator() and ESAPI.encoder()

We are implementing application security in our website. It's a REST-based application, so I will have to validate the whole request payload rather than each attribute. The payload needs to be validated against all types of attacks (SQL injection, XSS, etc.). While browsing, I found that people use ESAPI for web security.
I am confused between the ESAPI.validator().getValidXXX and ESAPI.encoder() Java APIs of the ESAPI library. What is the difference between the two, and when should I use which? I would also like to know in what cases we might use both.
As per my understanding, I could encode an input to form valid HTML using either API, e.g.:
ESAPI.encoder().encodeForHTML(input);
ESAPI.validator().getValidSafeHTML(context, input, maxLength, allowNull).
For XSS attacks, I have made code changes to strip HTML tags using Java's Pattern and Matcher classes, but I would like to achieve the same thing using ESAPI. Can someone help me do that?
Or
Are there any new Java plugins developed for web security, similar to ESAPI, that I have not come across? I have found https://jsoup.org/, but it addresses only XSS; I am looking for a library that provides APIs against several attacks (SQL injection, XSS).
ESAPI.encoder().encodeForHTML(input);
You use this when you're sending data to a browser, so that the data you send is escaped for HTML. This can get tricky, because you have to know whether that exact data is, for example, being passed to JavaScript before it is rendered into HTML, or being used as part of an HTML attribute.
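ESAPI is a Java API, but what encodeForHTML does is easy to see in any language. A hand-rolled TypeScript equivalent (an illustration of the concept, not ESAPI itself):

// Escape the five characters that can change parsing context in an HTML body.
function escapeHtml(untrusted: string): string {
  return untrusted
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&#x27;");
}

// escapeHtml('<script>alert(1)</script>') => "&lt;script&gt;alert(1)&lt;/script&gt;"

Note that this is only correct for HTML body context; data landing in a JavaScript block or an attribute needs different escaping, which is exactly the trickiness described above.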
We use:
ESAPI.validator().getValidSafeHTML(context, input, maxLength, allowNull).
when we want to accept "safe" HTML from a client, backed by an AntiSamy policy file that describes exactly which HTML tags and attributes we will accept from the user. The default is deny, so you have to explicitly tell the policy file what you will accept, e.g.:
<a href="http://example.com">text</a>
You need to specify that you want the "a" tag and that you will allow an "href" attribute, and you can even specify further rules for the content within the text fields and tag attributes.
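To make the default-deny allowlist idea concrete outside Java, here is the same policy sketched in TypeScript with the npm sanitize-html package (an illustration of the concept only, not ESAPI/AntiSamy):

import sanitizeHtml from "sanitize-html";

const untrustedInput = '<a href="javascript:alert(1)">hi</a><script>evil()</script>';

// Default is deny: only what is listed survives. This mirrors an AntiSamy
// policy that accepts <a href="..."> and nothing else.
const clean = sanitizeHtml(untrustedInput, {
  allowedTags: ["a"],
  allowedAttributes: { a: ["href"] },
  allowedSchemes: ["http", "https"], // so the javascript: URL is dropped
});
// clean === '<a>hi</a>': the script tag and the javascript: href are gone.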
You only need "getValidSafeHTML" if your application needs to accept HTML content from users, which is rarely a real requirement in most corporate applications. (Myspace used to allow it, and the result was the Samy worm.)
Generally, you use the validator API when content is coming into your application, and the encoder API when you direct content back to a user or a backend interpreter. AntiSamy isn't supported anymore, so if you need a "safe HTML" solution, use OWASP's HTML Sanitizer.
Are there any new Java plugins developed for web security, similar to ESAPI, that I have not come across? I have found https://jsoup.org/, but it addresses only XSS; I am looking for a library that provides APIs against several attacks (SQL injection, XSS).
The only other one that attempts a similar breadth of security is HDIV. Here is an answer comparing HDIV to ESAPI, written by an HDIV developer.
*DISCLAIMER: I am an ESAPI developer, and OWASP member.
Sidenote: I discourage the use of jsoup, because by default it mutates incoming data, constructing "best guess" (invalid) parse trees, and it doesn't give you fine-grained control over that behavior... meaning that if there's an instance where you want to override it and mandate a particular kind of policy, jsoup asserts that it is always smarter than you are... and that's simply not the case.

Scraping WebObjects website & REST

I need to programmatically interact with a WebObjects website and extract data from the responses. The particular WebObjects site I am scraping uses component actions and stores sessions in cookies (not urls). This means that all urls look something like this:
http://example.com/WOApp/WebObjects/WOApp.woa/wo/7.0.0.0.29.1.1.1
My first questions are:
Don't URLs like this completely destroy local and shared caching opportunities (the cacheable constraint in REST)? I imagine the only effective caching with such URLs is the WebObjects server itself.
Isn't addressability broken as well? Each resource does have a unique endpoint, but it changes constantly. Furthermore, (I think) WebObjects also invalidates URLs that are too old, since they "time out" after a period of time. I'm not sure whether this applies only to URLs with sessions, though.
Regarding the scraping, I am not sure whether it's possible to extract any meaningful endpoints from the website. For example, with a normal website I would look through the HTML and extract the POST URLs, then use them in my scraper by posting directly to them instead of going through the normal request-response cycle.
In this case I obviously cannot use any URLs extracted from the HTML since they are dynamically generated on each request, but I read something about being able to access WebObjects components directly if the security settings have not been set to disallow this (see https://developer.apple.com/legacy/library/documentation/LegacyTechnologies/WebObjects/WebObjects_3.5/PDF/WebObjectsDevGuide.pdf, p. 53 "Limitations on Direct requests"). I don't understand exactly how to do this though or if it's even possible.
If it's not possible, what would be a good approach? The only options I can think of are:
Using a full-blown browser client to interact with the website (e.g. Watir or Selenium) and extracting & processing the HTML from its responses
Manually extracting the dynamic endpoints by first requesting the page they appear on and then finding where they are located in the HTML, then using them afterwards as if they were "static".
I am interested in opinions on how to approach this scenario since I don't believe any of the solutions above are particularly good.
You've asked a number of questions, and I'll see if I can cover each in turn.
Don't URLs like this completely destroy local and shared caching opportunities (the cacheable constraint in REST)? I imagine the only effective caching with such URLs is the WebObjects server itself.
There is, indeed, a page cache within the WebObjects application server, and you're right to observe that these component action URLs probably thwart any other kind of caching. Additionally, even though the session ID is not present in the URL, you'd need the session ID in the cookie to re-create the same page, so having just that URL would get you a session restoration error from the application server.
Isn't addressability broken as well? Each resource does have a unique endpoint, but it changes constantly.
Well, yes, on the face of it this is true. You've given a component action URL as an example, and they're tied to the session.
Furthermore, (I think) WebObjects also invalidates URLs that are too old, since they "time out" after a period of time. I'm not sure whether this applies only to URLs with sessions, though.
Again, all true. Component action URLs generate sessions, and sessions time out.
At this point, let me take a quick diversion. I'm assuming you're not the owner of the WebObjects application—you're talking about having to scrape a WebObjects app, and you've identified some ways in which this particular app doesn't conform to REST principles. You're completely right—a fully component-action-based WebObjects application won't be RESTful. WebObjects pre-dates REST by a few years. Having said that, there are ways in which a WebObjects application can be completely RESTful:
Using session-less direct actions gives a degree of REST-like behaviour, and would certainly solve the problems you identify with caching, addressability and expiry.
Using the ERRest framework to create a 100% RESTful application.
Of course, none of this will help you if you're just trying to scrape a legacy application.
Regarding the scraping, I am not sure whether it's possible to extract any meaningful endpoints from the website. For example, with a normal website I would look through the HTML and extract the POST URLs, then use them in my scraper by posting directly to them instead of going through the normal request-response cycle.
Again, if it's a fully component action-based application, you're right—all those URLs will be dynamically generated and useless to you.
In this case I obviously cannot use any URLs extracted from the HTML since they are dynamically generated on each request, but I read something about being able to access WebObjects components directly if the security settings have not been set to disallow this…
That's talking about getting a component to render directly from its template with some restrictions:
As you note, the application can easily prevent it from happening at all.
As mentioned on p.53, the user input and action-invocation phases of rendering the component are skipped, which probably means this approach would be limited to rendering a component that didn't have any dynamic content anyway. This might be of some very limited use to you, though you'd need to know the component names you were interested in, and they wouldn't normally be exposed anywhere.
I'm not sure you're going to find anything better than the types of high-level functional approaches you've already suggested above, such as automating at the browser level with Selenium. If what you need is REST-style direct addressability of resources within the application, you're not going to get that unless you can re-write the application to use direct actions or ERRest where you need them.
A little late, but this could help.
I use Apache's mod_ext_filter (slightly modified) to pre/post-filter the requests/responses of our WebObjects application. The filter calls PHP scripts that can read the dynamic hyperlinks and other things from the HTML pages. The scripts can also modify the HTTP requests, so we can programmatically add/remove parameters from a request to implement new workflows in front of the legacy app and clean up requests before they reach WebObjects. It is also possible to maintain an additional database within the scripts and store some state across multiple requests.
So you can capture the dynamically created links (e.g. a button's name or an HTML form destination) and recognize those names within the request.
It is also possible to "remote control" such applications with little scripts like "click the third button on the page". The only thing you need is a DOM parser to get the structure of the HTML pages and then rebuild the actions the browser would perform (i.e. create the HTTP request manually and send it as a POST to the extracted form destination href). The only problem is the JavaScript code, which we analyze and re-implement within PHP (e.g. enabling/disabling input elements so they are not transmitted within the requests).
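For what it's worth, the same "rebuild the browser's action" idea can be sketched in TypeScript (the answer above uses PHP; the URL, regex, and field names below are placeholder assumptions, and Node 18+ provides the global fetch):

async function submitExtractedForm(pageUrl: string): Promise<string> {
  // 1. Fetch the page containing the dynamically generated form.
  const page = await fetch(pageUrl);
  const html = await page.text();
  const cookie = page.headers.get("set-cookie") ?? ""; // carry the session

  // 2. Extract the dynamic form destination (the component-action href).
  const match = html.match(/<form[^>]+action="([^"]+)"/i);
  if (!match) throw new Error("No form action found; page structure changed?");
  const action = new URL(match[1], pageUrl).toString();

  // 3. Replay the POST the browser would send, with the session cookie attached.
  const res = await fetch(action, {
    method: "POST",
    headers: { "content-type": "application/x-www-form-urlencoded", cookie },
    body: new URLSearchParams({ someField: "value" }).toString(), // placeholder fields
  });
  return res.text();
}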
There were some problems with the WebObjects adaptor module for Apache. It still sets Content-Length in the HTTP header, which you cannot change in mod_ext_filter; if you change the HTML or the parameters within the request, the content length no longer matches. But it is possible to work around that.
Theoretically, it could even be possible to control such a closed-source legacy application from a new UI on a tablet or smartphone, delegating the user interaction to the backend WebObjects app.
The scripts depend on the page structure, so if your WebObjects app changes, you have to correct some things in the scripts (e.g. the third button might now be the fourth button).
It should also be possible to put a RESTful interface in front of the application and have the filter scripts query data from the legacy app.

Graceful Degradation with REST in CakePHP

Alright, so a better title here may have been "Progressive Enhancement with REST in CakePHP", but at least now I'll know you didn't read the question if your answer just refers to the difference between the two ;)
I'm pretty familiar with REST and how to integrate it with CakePHP, but I'm not 100% sure how to keep a conventionally functioning website alongside it. Using Router::mapResources sounds like a great idea, but it creates a problem for maintaining the "graceful degradation" version of the site, because both POST requests to /resource/ and GET requests for /resource/add route to the same action (add). Clearly I'll want this action to return a JSON object if the client is using the REST API, but if they're using the degraded version of the site (no JS, perhaps), it should render an add form, right?
What's the best way to deal with this? Do you route your REST requests to other action names using Router::resourceMap()? Do you use that crazy hack I saw that makes the /api/ prefix part of the resourceMap so you can use api_action functions? Do you have the actions handle both REST and conventional requests by checking isAjax()? If so, how do you ensure that you can rely on the browser to properly support the other two request types?
I've searched around quite a bit but haven't found anything about how to keep conventional requests available in Cake alongside REST, so if anyone has any advice or experience, I'd love to hear it!
CakePHP uses extension routing as well, via Router::parseExtensions(), so:
/test/action will render views/test/action.ctp
/test/action.html also
/test/action.json will render views/test/json/action.ctp
/test/action.xml will render views/test/xml/action.ctp
If all views are designed to handle the same data as set by your controller, you'll be able to show a regular HTML form and handle the posted data the same way as you'd handle the AJAX request.
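As a framework-neutral illustration of that "one action, several renderings of the same data" idea (sketched with Express/TypeScript here, not CakePHP; the route and field names are assumptions):

import express from "express";

const app = express();

app.get("/recipes/add", (req, res) => {
  const data = { fields: ["title", "body"] }; // same data for every representation
  res.format({
    html: () => res.render("recipes/add", data), // the degraded, no-JS form
    json: () => res.json(data),                  // the REST/AJAX client
  });
});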
You might have to add checks for whether any data was actually posted/submitted inside the /add, /edit, and /delete actions, to prevent items being deleted without a form being posted (I haven't tested that, though; it may be that Cake blocks these URLs when mapResources is set for the controller).
REST in CakePHP:
http://book.cakephp.org/2.0/en/development/rest.html
(Extension) Routing
http://book.cakephp.org/2.0/en/development/routing.html#file-extensions

What is the best way of making a mobile version of a site in asp.net MVC2?

I've been thinking about this recently and I don't know a really nice and tidy way of creating a mobile version of an existing or new MVC2 website/app.
I think the easiest way would be to just use a different stylesheet depending on whether a mobile device is detected, but sometimes you need to change the view content too, if you have massive inline images everywhere or for other reasons.
What is a good approach for this? Is there a way of theming fairly easily in MVC2?
Well, MVC is just your server-side technology; what you should ask yourself is "what is the best practice for creating a mobile web site, regardless of the server-side tech?"
In my opinion, creating well-formed and semantic (X)HTML is the first step. As you say, the most logical thing to do is create different style sheets for different media types, and you're right.
As for the problems you mention, like inline images, consider this: are those images content or presentation?
In the first case, they should be present even in the mobile version.
In the latter, they are defined in the style sheet, so you can simply avoid them in the mobile css.
The only exception I can think of is when you want to provide different functionality on mobile, or when you're forced to, e.g. on pages that rely heavily on JS that wouldn't run on mobile browsers. In this case, you might want to create different versions of those pages and serve the appropriate version based on the user agent.
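As a rough sketch of that user-agent check (in TypeScript, since the idea is independent of the server tech; the regex is a simplistic assumption, and real detection usually relies on a maintained device database):

// Crude user-agent sniff: good enough to show the shape of the idea.
function isMobile(userAgent: string): boolean {
  return /Mobile|Android|iPhone|iPad|Windows Phone/i.test(userAgent);
}

// Pick a view variant per device class, e.g. "Index" -> "Index.Mobile".
function viewNameFor(baseView: string, userAgent: string): string {
  return isMobile(userAgent) ? `${baseView}.Mobile` : baseView;
}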
Check the source code for NerdDinner. They've implemented a MobileCapableWebFormViewEngine class that inherits from the base WebFormViewEngine class. MobileCapableWebFormViewEngine uses the HttpContext to decide which view to render for the client. This will make more sense when you see the source code.