AFAIK, TinyMCE is supposed to be self-sufficient XSS-wise, as its editor prevents anything that could be used for XSS.
However, all this is done client-side, and security depends entirely on the POST data being clean thanks to TinyMCE.
What's stopping an attacker from crafting a custom HTTP request with malicious tags in the POST body?
Does everybody who uses TinyMCE also run an extensive anti-XSS library on the server side to make sure this doesn't happen? Is there a way to make sure the input did indeed come from TinyMCE and not from a hand-crafted POST request?
Needless to say, just escaping everything with the likes of htmlspecialchars() isn't an option, as the entire point of TinyMCE is to let users input HTML formatted content.
You can't trust anything coming from the client side. An attacker could even modify TinyMCE itself to disable whatever protections you have added.
On the server side you could use something like OWASP AntiSamy or HTMLPurifier, which let you specify which tags you allow (whitelisting tags and attributes).
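Neither of those specific libraries is required; the important part is allowlisting on the server. As a rough sketch of the idea, here is what it might look like in Python using the bleach library as a stand-in for AntiSamy/HTMLPurifier (the tag and attribute lists below are placeholder assumptions):

# Sketch of server-side allowlist sanitization, using Python's bleach library as a
# stand-in for AntiSamy (Java) or HTMLPurifier (PHP). The allowed tags/attributes
# are placeholders; match them to whatever your TinyMCE configuration produces.
import bleach

ALLOWED_TAGS = ["p", "br", "strong", "em", "ul", "ol", "li", "a", "h1", "h2", "h3"]
ALLOWED_ATTRIBUTES = {"a": ["href", "title"]}

def sanitize_submission(dirty_html):
    # Anything not on the allowlist (script tags, inline event handlers,
    # javascript: URLs, ...) is removed before the content is stored.
    return bleach.clean(dirty_html, tags=ALLOWED_TAGS, attributes=ALLOWED_ATTRIBUTES, strip=True)

print(sanitize_submission('<p onclick="alert(1)">Hello</p><script>evil()</script>'))
# the onclick handler and the script tag do not survive the cleaning

Run something like this on every submission, regardless of what the editor claims to have done client-side.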
I'm using Keycloak to send a forgot-password email, and from what I've read in their docs and the FreeMarker docs, it seems like I should be able to use HTML tags just fine. However, when I use them in the password-reset.ftl file, it renders the whole tag like so:
<p>Some Text</p>
instead of just showing: Some Text
I found this (https://issues.jboss.org/browse/KEYCLOAK-681) saying that Keycloak can only send plain-text emails, and I just wanted to see if anyone knew for sure, since I have found some things suggesting that HTML tags should be usable (for example, How do you block emails from appearing as links in FreeMarker?).
Any advice or thoughts on this would be greatly appreciated.
There are two sub-directories containing templates for emails, called text and html. When you want HTML, you need to edit the templates located in the html directory.
Keycloak itself sends emails as multi-part messages containing both plain-text and HTML versions - email client decides which one is displayed.
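This is standard email behaviour rather than anything Keycloak-specific: a multipart/alternative message carries both a plain-text and an HTML body, and the receiving client picks one. A generic Python illustration of that structure (addresses and the SMTP host are made up):

# Generic illustration of a multipart/alternative email, not Keycloak-specific.
# The message carries a plain-text part and an HTML part; the client decides
# which one to render. Addresses and the SMTP host are placeholders.
import smtplib
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText

msg = MIMEMultipart("alternative")
msg["Subject"] = "Reset your password"
msg["From"] = "no-reply@example.com"
msg["To"] = "user@example.com"

text_body = "Someone requested a password reset.\nOpen this link: https://example.com/reset"
html_body = "<p>Someone requested a password reset.</p><p><a href='https://example.com/reset'>Reset password</a></p>"

# Plain-text part goes first; HTML-capable clients prefer the last alternative.
msg.attach(MIMEText(text_body, "plain"))
msg.attach(MIMEText(html_body, "html"))

with smtplib.SMTP("smtp.example.com") as server:
    server.send_message(msg)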
I am not familiar with Keycloak, but somehow you need to set the MIME type of your e-mail to "text/html" (e.g. have a look at this Stack Overflow answer).
Which version of Keycloak are you using? Comparing the source code of 'FreeMarkerEmailProvider' in tag 1.2.0.Final and on branch 1.3.x leads me to assume that Keycloak can handle text/html at least from version 1.3.x on.
But again: I am not familiar with Keycloak...
Trying to read in the pricing lists under the pricing information tab:
urlread('http://www.cefconnect.com/Details/Summary.aspx?Ticker=KYE#pricing')
But the '#pricing' in the URL doesn't help.
Any suggestions?
As already pointed out by Darin, it's no use adding #pricing to the URL. The web page uses client-side techniques to switch between tabs; not something that can be used by urlread.
Summary.aspx always returns all tabs together as one big page. CSS and JavaScript make it look like a collection of tabs, when opened in a web browser.
Use the developer toolbar of your web browser to inspect the web page. For example in Google Chrome, just right-click on the section you are interested in, and select 'inspect element'.
I don't know what you are going to do with the result of urlread, but you'll probably have to do some parsing to distill the information you need from the HTML clutter.
Please note Summary.aspx launches additional HTTP requests to retrieve additional data. Use the 'Network' tab of Chrome's developer toolbar to analyze that behavior. For example, the following request is made when you click 'GO' after adjusting the pricing history filter criteria.
http://www.cefconnect.com/Resources/TableData/?Type=PricingHistory&Cusip=48660P104&param0=1M&param1=06/06/2014
At first, this seems to complicate the whole matter, but it may actually be a great opportunity. You can call urlread with the URL above, and get some data in JSON format, which is far less cluttered than HTML. Adjust the parameters to get different data. I'm not sure what 48660P104 is; given the parameter name, it is presumably the fund's CUSIP identifier for KYE. You may want to use an initial HTTP request to Summary.aspx to retrieve that code; you'll notice the webpage is littered with URLs containing the same Cusip parameter.
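For what it's worth, here is the same fetch-and-decode idea sketched in Python rather than MATLAB, just to show how little is involved once you hit the JSON endpoint directly (parameter values are copied from the example request above; I have not verified the exact shape of the response):

# Sketch (Python rather than MATLAB) of calling the TableData endpoint and
# decoding the JSON it returns. Parameter values are taken from the request above;
# the response structure is unverified, so this just prints whatever comes back.
import json
import urllib.request

url = ("http://www.cefconnect.com/Resources/TableData/"
       "?Type=PricingHistory&Cusip=48660P104&param0=1M&param1=06/06/2014")

with urllib.request.urlopen(url) as response:
    data = json.loads(response.read().decode("utf-8"))

print(data)  # adjust Type/param0/param1 to pull different tables and date ranges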
The # character has a special meaning in a URL. It represents the fragment identifier, and the value following it is never sent to the server; only client-side JavaScript can access it. You will need to URL-encode the value if you want to send it to the server:
urlread('http://www.cefconnect.com/Details/Summary.aspx?Ticker=KYE%23pricing')
The same holds true for other special characters: you need to encode them properly.
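The encoding itself is mechanical; for example, in Python:

# Percent-encode the fragment character so it is actually sent to the server.
from urllib.parse import quote

print(quote("KYE#pricing", safe=""))  # -> KYE%23pricing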
I'm seeing many frameworks recently that have decided to "fake" PUT and DELETE requests in form submissions (not AJAX), like Ruby on Rails. They seem to be waiting for browsers to catch up. Are they waiting in vain?
Is this even slated to be implemented anywhere?
Browsers do support PUT and DELETE, but it's HTML that doesn't.
For example, a browser can initiate a PUT request via JavaScript (AJAX), but not via an HTML <form> submission.
This is because HTML 4.01 and the final W3C HTML 5.0 spec both say that the only HTTP methods that their form elements should allow are GET and POST.
There was much discussion about this during the development of HTML 5, and at one point they got added to HTML 5, only to be removed again. The reason the additional methods were removed from the HTML 5 spec is because HTML 4-level browsers could never support them (not being part of HTML at the time they were made); and there is no way to allow them to do so without a JavaScript shim; thus, you may as well use AJAX.
Web pages trying to use forms with method="PUT" or method="DELETE" would fall back to the default method, GET for all current browsers. This breaks the web applications' attempts to use appropriate methods in HTML forms for the intended action, and ends up giving a worse result — GET being used to delete things! (hello crawler. oh, whoops! there goes my database)
Changing the default method for HTML <form> elements to POST would help (IMO the default should have always been POST, ever since Mosaic* debuted forms in 1993), but to change the default would take at least a decade to percolate through the installed base. So in two words: ‘because legacy’. :-(
To support current browsers, authors will have to fake it with an override. I recommend authors use the widely known _method argument by including <input type=hidden name=_method value=DELETE> in their HTML; switch the form method to POST (since the request is unsafe); then add recognition of _method on the server side, which should then do whatever's necessary to mutate the request and forward it on as if it were a real DELETE request.
Note also that, since web browsers are the ultimate HATEOAS client, they need a new state transferred to them in response to a DELETE request. Existing APIs often return 204 No Content for such requests; you should instead send back a hypermedia response with links, so that the user can progress their browser state.
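As a concrete sketch of the server-side half, here is roughly what the override plus a hypermedia-style response could look like in a small Python (Flask) app; the route, handler names and response bodies are invented for illustration:

# Sketch of the server-side override in a small Flask app. The browser can only
# submit GET or POST from an HTML form, so the POST handler inspects the hidden
# _method field and dispatches accordingly, then answers with a hypermedia
# response (links the user can follow) rather than a bare 204.
from flask import Flask, abort, request

app = Flask(__name__)

@app.route("/articles/<int:article_id>", methods=["POST"])
def article_post(article_id):
    override = request.form.get("_method", "").upper()
    if override == "DELETE":
        return delete_article(article_id)
    if override == "PUT":
        return update_article(article_id)
    abort(405)  # no recognised override: refuse rather than guess

def delete_article(article_id):
    # ... remove the record here ...
    # Give the browser a new state to render instead of 204 No Content.
    return '<p>Article deleted.</p><p><a href="/articles">Back to the article list</a></p>'

def update_article(article_id):
    # ... apply the submitted fields here ...
    return f'<p>Article updated.</p><p><a href="/articles/{article_id}">View it</a></p>'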
Also see the answers to these similar/identical questions:
Why are there are no PUT and DELETE methods on HTML forms?
Are the PUT, DELETE, HEAD, etc methods available in most web browsers?
Using PUT method in HTML form
Do Browsers support PUT requests with multipart/form data
* Mosaic, created by Marc Andreessen, also introduced the compound mistake of the <img src=…> tag — it should have been <image source=…>fallback</image>.
I'm basically looking at a security problem between a parent page and an iframe with links to a third party.
I want to send a POST or a GET (doesn't matter which, as I can control the other side) to the third party without exposing any details within it (say a SID or a user token), and have its HTML content (JS/HTML/images) loaded into the iframe.
I've looked at server-side redirects and at creating a proxy using WebClient/WebResponse, and am curious whether there is a good way to do it.
Has anyone ever done this before, or do you think the security I'm after is not possible? Hell, even tell me if I'm barking up the wrong tree on how to solve this.
If anybody has any examples on this it would be greatly appreciated.
Cheers,
Jamie
[Edit] Was thinking I might need to add some more details.
Say I have a parent page: https://mycompany.com/ShowThirdParty.
This has an iframe in it at the moment, which will hold the content of another component (also owned by me, or more specifically by another team).
Basically I'd like to send some credentials to the content in the iframe in such a way that the external pages can't read them; the iframe is put into a modal (I've done that) and holds the restricted content, with the authentication almost seamless and invisible.
I currently have it working as a GET URL generated dynamically via JS and then passed into the iframe src attribute; obviously that isn't secure.
I kind of want some sort of server-side redirect across a full URL, but I don't even think that's possible.
You could try using AJAX to load a PHP script (with any parameters to the script encoded/encrypted) that queries the 3rd-party page, and load the response into the iframe. Not really sure how your code is set up, but there should be a way.
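As a rough illustration of that proxy idea (sketched in Python rather than PHP; the third-party URL, parameter names and token storage are all made up):

# Rough illustration of the server-side proxy idea, in Python (Flask) rather than
# PHP. The iframe points at /thirdparty-proxy on your own domain; the secret token
# is attached server-side, so it never appears in the page source or the iframe URL.
# The third-party URL, parameter names and token storage are placeholders.
import requests
from flask import Flask, Response, session

app = Flask(__name__)
app.secret_key = "change-me"  # placeholder

THIRD_PARTY_URL = "https://thirdparty.example.com/widget"

@app.route("/thirdparty-proxy")
def thirdparty_proxy():
    token = session.get("third_party_token")  # looked up server-side, never sent to the browser
    upstream = requests.get(THIRD_PARTY_URL, params={"token": token}, timeout=10)
    # Relay the third party's HTML back to the iframe.
    return Response(upstream.content, content_type=upstream.headers.get("Content-Type", "text/html"))

Bear in mind that any relative links or assets inside the proxied HTML will then resolve against your domain, so a pure pass-through proxy usually needs some URL rewriting as well.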
It can also be done with the POST method (submit the data to the iframe using POST); since the connection is HTTPS, the data you send to the iframe is encrypted in transit.
I want to crawl a site with Greasemonkey and wonder if there is a better way to temporarily store values than with GM_setValue.
What I want to do is crawl my contacts in a social network and extract the Twitter URLs from their profile pages.
My current plan is to open each profile in its own tab, so that it looks more like a normal browsing person (i.e. CSS, scripts and images will be loaded by the browser), then store the Twitter URL with GM_setValue. Once all profile pages have been crawled, create a page using the stored values.
I am not so happy with the storage option, though. Maybe there is a better way?
I have considered inserting the user profiles into the current page so that I could process them all with the same script instance, but I am not sure whether XMLHttpRequest requests look indistinguishable from normal user-initiated requests.
I've had a similar project where I needed to get a whole lot of data (invoice line data) from a website and export it into an accounting database.
You could create a .aspx (or PHP etc) back end, which processes POST data and stores it in a database.
Any data you want from a single page can be stored in a form (hidden using style properties if you want), using field names or ids to identify the data. Then all you need to do is point the form action at an .aspx page and submit the form using JavaScript.
(Alternatively you could add a submit button to the page, allowing you to check the form values before submitting to the database).
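A minimal sketch of such a back end, written in Python rather than .aspx/PHP (the field names, endpoint and table layout are assumptions):

# Minimal sketch of the back end described above, in Python rather than .aspx or
# PHP: it accepts the POSTed form fields and stores them in a local SQLite
# database. Field names, the /collect endpoint and the table layout are made up.
import sqlite3
from flask import Flask, request

app = Flask(__name__)
DB_PATH = "scraped.db"

def init_db():
    with sqlite3.connect(DB_PATH) as db:
        db.execute("CREATE TABLE IF NOT EXISTS contacts (profile_url TEXT, twitter_url TEXT)")

@app.route("/collect", methods=["POST"])
def collect():
    with sqlite3.connect(DB_PATH) as db:
        db.execute(
            "INSERT INTO contacts (profile_url, twitter_url) VALUES (?, ?)",
            (request.form.get("profile_url"), request.form.get("twitter_url")),
        )
    return "ok"

if __name__ == "__main__":
    init_db()
    app.run()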
I think you should first ask yourself why you want to use Greasemonkey for your particular problem. Greasemonkey was developed as a way to modify one's browsing experience -- not as a web spider. While you might be able to get Greasemonkey to do this using GM_setValue, I think you will find your solution to be kludgy and hard to develop. That, and it will require many manual steps (like opening all of those tabs, clearing the Greasemonkey variables between runs of your script, etc).
Does anything you are doing require the JavaScript on the page to be executed? If so, you may want to consider using Perl and WWW::Mechanize::Plugin::JavaScript. Otherwise, I would recommend that you do all of this in a simple Python script. You will want to take a look at the urllib2 module. For example, take a look at the following code (note that it uses cookielib to support cookies, which you will most likely need if your script requires you to be logged into a site):
import urllib2
import cookielib

# build an opener with cookie support, so pages that require you to be logged in still work
opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cookielib.CookieJar()))
response = opener.open("http://twitter.com/someguy")
responseText = response.read()
Then you can do all of the processing you want using regular expressions.
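For example, continuing from the snippet above, extracting Twitter URLs from the fetched profile page might look like this (the pattern is a guess at what the markup contains; adjust it to the real pages you are crawling):

# Pull Twitter profile URLs out of the fetched HTML with a regular expression.
import re

twitter_urls = re.findall(r'https?://(?:www\.)?twitter\.com/[A-Za-z0-9_]+', responseText)
print(set(twitter_urls))  # de-duplicate before storing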
Have you considered Google Gears? That would give you access to a local SQLite database in which you can store large amounts of information.
The reason for wanting Greasemonkey is that the page to be crawled does not really approve of robots. Greasemonkey seemed like the easiest way to make the crawler look legitimate.
Actually, running your crawler through the browser does not make it any more legitimate. You are still breaking the terms of use of the site! WWW::Mechanize, for example, is equally well suited to 'spoof' your User-Agent string, but doing that, and crawling at all, is a violation (and possibly illegal) if the site does not allow spiders/crawlers!
The reason for wanting Greasemonkey is that the page to be crawled does not really approve of robots. Greasemonkey seemed like the easiest way to make the crawler look legitimate.
I think this is the hardest way imaginable to make a crawler look legitimate. Spoofing a web browser is trivially easy with some basic understanding of HTTP headers.
Also, some sites have heuristics that look for clients that behave like spiders, so simply making requests look like they come from a browser doesn't mean they won't know what you are doing.