How to read the DOM of an iframe loaded with a page from another domain?

Is there a way to access the DOM of the document in an iframe from parent doc if the doc in the iframe is on another domain? I can easily access it if both parent and child pages are on the same domain, but I need to be able to do that when they are on different domains.
If not, maybe there is some other way to READ the contents of an iframe (one consideration was to create an ActiveX control, since this would be for internal corporate use only, but I would prefer it to be cross-browser compatible)?

Not really. This is essential for security – otherwise you could open my online banking site or webmail and mess with it.
You can loosen the restriction a bit by setting document.domain, but the top-level domain must still be the same.
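For example, if the parent page and the framed page are both subdomains of the same registrable domain, each page can opt in like this (the host names and frame id below are made up for illustration):

// Run in BOTH pages, e.g. the parent at app.example.com and the frame at widgets.example.com:
document.domain = 'example.com';

// Once both pages have done so, the parent can read the frame's DOM again:
var frame = document.getElementById('myFrame');
var framedDoc = frame.contentDocument || frame.contentWindow.document;
console.log(framedDoc.body.innerHTML);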
You can work around this limitation by proxying requests via your own server (but don't forget to secure it, otherwise spammers and scammers may abuse it):
my.example.com/proxy?url=otherdomain.com/page
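A rough sketch of such a proxy in Node.js (the route, parameter name and host whitelist are my own assumptions, not part of the original suggestion):

const http = require('http');
const https = require('https');
const { URL } = require('url');

// Whitelist the hosts you actually need, otherwise you are running an open proxy.
const ALLOWED_HOSTS = ['otherdomain.com'];

http.createServer(function (req, res) {
  const target = new URL(req.url, 'http://my.example.com').searchParams.get('url');
  if (!target) { res.writeHead(400); return res.end('missing url parameter'); }

  const targetUrl = new URL(/^https?:\/\//.test(target) ? target : 'http://' + target);
  if (!ALLOWED_HOSTS.includes(targetUrl.hostname)) {
    res.writeHead(403); return res.end('host not allowed');
  }

  const client = targetUrl.protocol === 'https:' ? https : http;
  client.get(targetUrl, function (upstream) {
    // Relay the remote page back to the browser as same-origin content.
    res.writeHead(upstream.statusCode, upstream.headers);
    upstream.pipe(res);
  }).on('error', function () { res.writeHead(502); res.end('upstream error'); });
}).listen(8080);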

Theoretically you can access the content of the iframe using the standard DOM Level 2 contentDocument property. In practice you may have found out that most browsers deny access to the DOM of the external document due to security concerns.
Access to the full DOM is, AFAIK, not possible (though there might be some browser-specific tweak to disable the same-domain check). For cross-domain XHR a popular trick is to bounce the data back and forth between the iframe and the main document using URL fragment identifiers (see e.g. this link). You can use the same technique (a rough sketch follows the list below), but:
the document loaded in the iframe must cooperate, and
you don't have access to the full document anyway (you can read back some parameters, or maybe you can try to URL-encode the whole document - but that would be very ugly)
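To make the idea concrete, here is a rough sketch of the fragment trick (the function names and polling interval are mine; it depends on the framed page cooperating, and on the browser allowing a frame to rewrite its parent's fragment, which older browsers did):

// In the framed page (other domain), whenever it wants to send something:
function sendToParent(message) {
  // Changing only the fragment navigates the parent without reloading it.
  parent.location.hash = '#' + encodeURIComponent(message);
}

// In the parent page: poll our own fragment for new messages.
var lastHash = '';
setInterval(function () {
  if (location.hash !== lastHash) {
    lastHash = location.hash;
    var message = decodeURIComponent(lastHash.slice(1));
    console.log('message from iframe:', message);   // only small, URL-safe payloads fit here
  }
}, 100);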

I just found the postMessage method introduced with HTML5; it's already implemented in recent browsers (FF3, IE8 and Safari 4). It allows the exchange of messages between any window objects inside the browser.
For the details see the documentation at MDC and this nice tutorial by John Resig.
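A minimal sketch of how the two sides talk (the frame id and origins below are made up; the framed page still has to cooperate and decide what it hands back):

// In the parent page, e.g. https://parent.example.com
// (in real code, wait for the frame's load event before posting):
var frame = document.getElementById('myFrame');
frame.contentWindow.postMessage('hello from the parent', 'https://other.example.net');

window.addEventListener('message', function (event) {
  if (event.origin !== 'https://other.example.net') return;   // always check the sender
  console.log('reply from the iframe:', event.data);
});

// In the framed page, e.g. https://other.example.net:
window.addEventListener('message', function (event) {
  if (event.origin !== 'https://parent.example.com') return;
  event.source.postMessage('hello back', event.origin);
});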

Related

Can Squarespace connect to an external JSON REST API...?

I am new to Squarespace and I was wondering if it can interact with an external REST API using JSON?
For example, say I have a Database being hosted privately and I want data from it to be shown via Squarespace and certain pages being restricted according to the user's privileges.
Is any of the above possible, and if so can you direct me to an example? I seem not to be able to find anything on the above via google.
Thanks
From Squarespace:
Squarespace doesn’t support server-side code, including PHP, Ruby, Ruby on Rails, and SQL.
Therefore, the only way to connect to an external API (besides those supported by Squarespace's official 'extensions') is to use "client-side" (in-browser) JavaScript.
So, the database solution which you use must be capable of securely handling client-side connections (for example, Firebase can do that). To interface with it, you must add the JavaScript to your Squarespace site via code block or code injection. An example explanation of doing that can be found at this question.
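As a rough illustration (the endpoint URL, JSON shape and element id are placeholders, and the external API must allow cross-origin (CORS) requests from your Squarespace domain), a code block containing something like the following could pull data in client-side, assuming the page also contains a div with id "db-content" to render into:

// Rough sketch for a Squarespace code block / code injection.
fetch('https://your-database-api.example.com/items', {   // hypothetical endpoint
  headers: { 'Accept': 'application/json' }
})
  .then(function (response) { return response.json(); })
  .then(function (items) {
    document.getElementById('db-content').textContent =
      items.map(function (item) { return item.name; }).join(', ');
  })
  .catch(function () {
    document.getElementById('db-content').textContent = 'Could not load data.';
  });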
As to allowing/disallowing content based on data returned from the database, it can be done, but only client-side. That means that, while you can make the site appear to restrict access and/or make it inconvenient for others to reach certain pages based on information in the database, it all happens in the browser, so someone familiar with web development, the web inspector, etc. could technically circumvent it. So it's not something you'd want to do if it is critical that the content be truly restricted.
Squarespace does have its own "Members Areas" which can be used to solve content access problems. However, it's extremely limited at the moment, and there are many scenarios it does not address.

Can You Hack a Website's Server?

I had an idea about website vulnerabilities, and I would like to know if it is possible. Also some suggestions on how to fix them.
If some part of my website writes data to the DOM and then calls the data back from it, would it be possible for someone to “hack” the server by editing the DOM in the browser?
For example, suppose I have some radio buttons. Each button has its own logic associated with it. If I remove one of the buttons, but fail to remove or comment out the logic, could someone go in and edit the DOM name of one of the buttons to the removed one, and upon submission have the server execute the logic associated with the removed radio button?
I understand how to fix that situation, by removing or commenting out the removed button’s logic, but I fear my site relies too heavily on such things that could be manipulated via the DOM. Hence, I’m wondering:
Is such a thing possible?
Is some complex validation method the only way to prevent “hacks” of this nature?
The answer to your question is yes. For example, in many browsers you can open a JavaScript console and change not only the DOM but also the JavaScript on the site.
There is no guarantee that the code you write for a webpage will be run as you code it. Any user can change their copy. What they should not be able to do is change other people's copy; when they do, it is called a cross-site scripting (XSS) attack. (Typically done by adding script to a field which is saved in a database server and then served to another user.)
To protect your site you need to ensure that all web service calls are secure -- that is, a user can't call them with malicious data and cause problems.
You also need to guard against SQL injection attacks.
There is NO way to protect against a user changing the web page on their machine and having it do something you did not intend, so all validation needs to occur both in the browser and on the server.
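As a rough illustration of the server-side half of that (a Node.js/Express sketch; the route, field name and option list are made up for this example), the server re-checks the submitted value against what it knows is valid, no matter what the form in the browser claimed to offer:

const express = require('express');
const app = express();
app.use(express.urlencoded({ extended: false }));

const ALLOWED_OPTIONS = ['basic', 'premium'];   // suppose 'legacy' was removed from the form

app.post('/submit', function (req, res) {
  const choice = req.body.plan;
  if (!ALLOWED_OPTIONS.includes(choice)) {
    // The DOM may have been edited to resurrect a removed radio button.
    return res.status(400).send('Invalid option');
  }
  // ...run the logic for a genuinely valid choice only...
  res.send('OK');
});

app.listen(3000);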
As an example of how easy it is to change the local browser behavior, consider the browser extension. A browser extension is a pre-coded way to change the way web pages act locally.
(Think about ad-blockers as a specific example.)

How to make a DOM element always visible, even when reloading the page?

I want to make some fixed divs at the bottom persistent, without reloading them, while the user is on my site. It is like Facebook chat: the user can be all over the site, but the chat is always visible.
This question is because I have created a chat with NodeJS, and when the page is refreshed the connection is destroyed and created again, so I want to make this connection persistent even while reloading the page.
I know a possible solution is to make every request an Ajax call, but... this is unusable...
You can try localStorage. Data will persist on reload.
Use that with JSON.stringify and JSON.parse to make it work.
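A minimal sketch of that idea (the key name, message shape and element id are assumptions): the chat keeps its transcript in localStorage, and on every page load the transcript is re-rendered before the NodeJS connection is re-created.

// Save the transcript whenever a message arrives (messages is an array of objects).
function saveMessages(messages) {
  localStorage.setItem('chatMessages', JSON.stringify(messages));
}

// Read it back on page load; fall back to an empty transcript the first time.
function loadMessages() {
  var raw = localStorage.getItem('chatMessages');
  return raw ? JSON.parse(raw) : [];
}

// Re-render the chat div before reconnecting to the Node server.
var chatBox = document.getElementById('chat');
loadMessages().forEach(function (msg) {
  var line = document.createElement('div');
  line.textContent = msg.from + ': ' + msg.text;
  chatBox.appendChild(line);
});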
Unfortunately some older browsers still in use won't like that. I think all new browser versions can use localStorage. If you're not concerned about old browsers that's fine too.
There are localStorage shims, but that depends on how much work you're willing to put into your project.
You can use localStorage to just store the HTML too if you like.
I prefer DOM element updates using JSON values myself.
Look up browser version market share and you'll see quite a few of the older versions are out of use. Though, like with any tech usage stats, you should take them with a grain of salt.
Edit: Whoops! You said fixed divs without reloading them. I don't think that's possible if they are a chat. Unless I don't understand the question. Post some code if you can.

How does Perl handle sessions differently from PHP?

I'm trying to clone a commercial Student Management System which was written in Perl. I want to use PHP, as I have no experience in Perl.
I am now trying to set up the login system, which should be (has to be?) done with PHPSESSIDs, right? Now, in PHP I could transmit the session ID via GET, POST, and COOKIE.
The Perl website doesn't add parameters to the URL (GET) and does not save cookies on my computer (COOKIE). There is also no form which could contain a hidden field (which would be POST in PHP, right?)
Can someone tell me how Perl remembers the logged in user there?
Perl takes a much more "toolkit"-based approach to building web applications than PHP does, because Perl was not designed specifically for web work. So it doesn't have any built-in way of doing web app session management; rather, there are many modules on CPAN which implement session management in many different ways.
If you were to identify the Student Management System in question and provide a URL, we might be able to look at it from the outside and identify what it's doing, but, really, I question whether you actually need to use the same session management system as the existing app unless you want to implement single-sign-on between the original version and your clone[1]. Concentrate on cloning the user-visible interface and functionality rather than the implementation details behind it.
[1] ...which would be futile anyhow unless you're also planning to tap into its session database on the back end, since neither one will recognize the other's session ids if they don't share that data.
For the sake of completeness, there are OTHER, non-standard ways to store/transmit session information besides the 3 methods you listed (although I seriously doubt any of them are used in your SMS). Among them:
Sending the cookie data as part of the DOM (e.g. in HTML) and having on-page JavaScript access it from the DOM (a sketch of this follows the list)
Or, just store the cookie data as JavaScript data in the first place.
AJAX calls. E.g. the session-enabled logic is all handled in AJAX URLs and not the main URLs. Yes, I know that's completely screwy. But doable.
Don't store the cookie in the main cookie database (so you can't find it using standard cookie viewing methods). For details on how that's done, please google "evercookie" for a VERY cool method of persistently storing cookie info by utilizing up to 10 redundant storage options - one good intro is http://blog.depthsecurity.com/2010/09/super-persistent-cookies-evercookie.html
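As a rough sketch of the first idea above (the attribute, header and URL names are made up): the server renders the session id into the page itself, e.g. as a data attribute on the body element, and client-side JavaScript sends it back with every AJAX call, so nothing shows up in the URL or in the cookie store.

// Assumes the server rendered something like <body data-session-id="...">.
var sessionId = document.body.getAttribute('data-session-id');

var xhr = new XMLHttpRequest();
xhr.open('POST', '/api/grades');                  // hypothetical session-enabled AJAX URL
xhr.setRequestHeader('X-Session-Id', sessionId);  // the id travels in a header, not a cookie
xhr.send();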
All that said, I completely agree with Dave's answer - just use PHP's best practices mechanism to implement the functionality instead of cloning possibly-perl-specific and possibly-weird implementation in the webapp.

Adding pages "on the fly" with a CMS system

I am in the process of building a website content management system for one of my clients. It's a highly customized system, so I cannot use any "off the shelf" solution.
I need to allow my client to add pages to the website on the fly. I have two options here:
(1) Create a database-driven page in the format of www.mycompany.com/page.aspx?catID=5&pageID=3 (query the database with the category and page IDs, grab the data and show it on the page) - or -
(2) Allow the management system to create static pages, something like www.mycompany.com/company/aboutus.aspx and www.mycompany.com/company/company_history.aspx , etc.
I believe that, while the former is much easier to implement, the latter is better both for the user AND for Google.
My questions are (finally): (1) Would you agree that the latter is a better solution, and (2) What is the best way to implement such a solution? Should I create and update each file using the FileSystem (i.e. - the site's management system requires the user to supply a page/file name, page title and content, and creates the page on the fly based on these parameters)? Is there a better way?
Thank you!
It's entirely possible to have database driven pages with nice URLs. StackOverflow itself is a great example - this question's URL is http://stackoverflow.com/questions/1119274/adding-pages-on-the-fly-with-a-cms-system, but the page is built from the database, not static HTML.
I would use the first solution, but mask the addresses using a custom request handler. Basically, give each of your pages a unique string ID (such as about-us) and then, with your request handler that takes all requests, find this particular page in the database and render it.
See this article for some additional info (found it when googling for custom HTTP handlers in ASP.NET). In that article, the following handler is added:
<add verb="*" path="*.piechart" type="PieChartHandler"/>
You would probably want to catch all paths (*), excluding certain media paths used for CSS, images and JavaScript.
More resources:
Custom HTTP Handler
HttpHandler in ASP.Net
I'd stay clear of static pages if I were you. Dynamic Data, MVC and some good planning should take you a long way!
What you need to do is create some or many templates that each view/controller in MVC can use. Let whoever is responsible for the content handle it through Dynamic Data entities.
I would use the first idea, but work out a better URL scheme. If the system doesn't provide nice URLs (without ?), you'll have trouble getting the search engines to parse the whole site. Also, using numbers instead of words makes it hard for users to pass around URLs.
If you start to have performance problems you could add caching that would generate static pages from time to time. I would avoid doing that until you have to; caching can cause many headaches along the way to getting it right.
Although the existing advice is more-or-less sound, the commentators have failed to consider one factor which, admittedly, you haven't given much detail on. Are these pages that they'll edit once they're built, or are they one-shot creations? If the latter, your plan of generating static pages isn't quite so bad as they suggest. Why bother even having to think about database schemas and caching when you can just serve flat content?
It will probably make for pretty lifeless, end-of-the-road pages, but if that's what you want ...