Browser Add-On/Extension and Browser Form Data

Can someone point me to an article (or discuss here) that explains how an add-on/extension can read what a user has entered into a form in the browser, so you can present data to them based on their search parameters?
An example would be the Sidestep extension that opens a sidebar when a user searches on an airline/travel site and presents them a Sidestep meta search based on the parameters used on the original airline/travel site.

Browser extensions are necessarily browser specific. I would look at the APIs for your target browser. Here's a thread on Firefox 3.0 extensions.

Extension to what? Your body? :)
If you're talking about a browser extension, then I'm pretty sure you're on the wrong track.
You could just search for the forms in the current page and, based on the field names, try to figure out what the user searched for...
A JS file and an AJAX call are all you need, and you could basically skip the AJAX call too... but I generally prefer server-side processing, as the source code stays more hidden that way.
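For instance, here is a minimal sketch of the form-reading part in plain JavaScript (the field names in the comment are hypothetical; a real extension would map the target site's actual field names to its own search):

// Sketch of a script that watches form submissions on the current page
// and collects what the user typed into the named fields.
document.addEventListener('submit', function (event) {
  var form = event.target;
  var params = {};
  Array.prototype.forEach.call(form.elements, function (el) {
    if (el.name) {
      params[el.name] = el.value;
    }
  });
  // e.g. { origin: "SFO", destination: "JFK", depart: "2012-06-01" }
  console.log('User searched with:', params);
  // At this point a sidebar could be opened and a meta search run
  // with these parameters.
}, true); // capture phase, so the submit is seen before the page navigates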

Related

History: avoiding hash ("#") character in URLs

We are using GWT and take advantage of its History framework. Everything works fine in the application, but some of our clients are trying to put hyperlinks to our application in their PowerPoint presentations. Unfortunately, there is a known problem in PowerPoint 2007 with hash signs ("#") in hyperlinks which makes them unusable.
So is there any way to change the separator character used in URLs generated by the GWT History framework to something other than the hash?
Or is it possible to intercept the new URL generated by GWT History and modify it before the browser's address bar is updated with it?
I don't think you can/should change the hash sign, mainly because this sign does not come from GWT but from the URL specification. You can read the part on hash fragments in this doc for a good explanation. The main point being that adding a # sign to a URL will not cause a full browser refresh. This is why this sign is used for Ajax and GWT's history.
If you still want to intercept new URLs, you should probably add a ValueChangeHandler to your History, and then use Window.Location.getHref() and Window.Location.assign() to change the URL. But that's like using History to do something it doesn't do, so you're better off implementing your own History management system.
See http://code.google.com/p/google-web-toolkit/issues/detail?id=7101 (there are links to sample code)
Basically, you can only do this in a browser that supports HTML5's pushState and onpopstate. This rules out Internet Explorer, and unfortunately the people using PowerPoint are likely to also be using IE, so basically you're doomed.
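For what it's worth, here is a rough sketch of what the hash-free approach looks like in plain JavaScript (not GWT's History class; the URL and state object are invented for the example):

// Only works where the HTML5 History API is available
if (window.history && typeof window.history.pushState === 'function') {
  // Change the address bar to a hash-free URL without reloading the page
  window.history.pushState({ view: 'user123' }, '', '/user123');

  // Restore application state when the user presses Back/Forward
  window.addEventListener('popstate', function (event) {
    var state = event.state || {};
    console.log('Navigate the app to:', state.view);
  });
} else {
  // Older browsers (IE before 10) have to fall back to the "#" fragment
  // that GWT's History mechanism already uses.
}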

How can I program a button on an Access form to link to a browser window that looks up multiple addresses on Google Maps?

My problem is very similar to the one posted here:
http://www.utteraccess.com/forum/Plotting-Addresses-Maps-t1968130.html
except that thread never found any solutions. Basically, I'm working on an Access form that has a datasheet as a subform. Upon clicking a button on the main form I'm trying to make it so that a browser window opens up and, using the address columns from the spreadsheet data in the subform, plot all the address markers listed. I've looked up a lot of ways to attempt this but I've yet to find a way that seems to work.
I'm not even sure if it's possible to plot multiple markers on Google Maps, but according to research (and after trying it myself) it seems like it isn't, although I don't want to rule it out entirely because I'm still not 100% sure. However I know both Google Earth and batchgeo.com do allow this. I still want to try and do this on Google Maps, but if that doesn't work I want to try to do it using batchgeo.com and if that still doesn't work, then Google Earth (I don't want to make the user download external software if possible).
If it helps, from what I've read APIs seem like a useful tool, though I'm not sure how to apply one to an Access form; it seems more like a way to embed maps into already existing websites.
I'd really appreciate if someone could help me figure out how to approach this problem!
Maybe this would help?
http://ramblings.mcpher.com/Home/excelquirks/getmaps/mapmarkers
It is Excel but should be translatable.
Here is another example, this time using Access:
http://www.utteraccess.com/forum/Google-Maps-Multiple-Mar-t1973499.html
...from what I've read APIs seem like a useful tool, though I'm not sure how to apply one to an Access form; it seems more like a way to embed maps into already existing websites.
You're right. There's no way that I'm aware of to embed a Google Maps object in a form (like an ActiveX control). Microsoft MapPoint is a software product that lets you do map integration by way of an ActiveX control (no need to use HTML and/or JavaScript).
What I usually do on a project like the one you're working on is get my HTML page working the way I want it to, outside of and independent from MS Access. You should be able to program and test the HTML file locally without having to use an actual web server. Just use something like Notepad++ or Sublime Text 2 to write your HTML and JavaScript, and then open the file in your browser to see if it works. I'm quite sure you'll need to use JavaScript in your HTML page to make this work. That's what the Google Maps API is all about.
After you have your web page working, you will have to go into Access and write code to create that web page on the fly with the address data for the current data set. You can just write it out to the Windows Temp folder and then open that web page in your browser (or a browser control).
Julian Knight's answer links to more specifics on how to create the HTML page on the fly. It looks like gobbledygook, mostly because it is. Outputting HTML/JavaScript/CSS from VBA is far less than optimal. This is why you troubleshoot it outside of Access, as much as you can.
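To give a feel for the kind of page the Access code would generate, here is a rough sketch of the JavaScript side of a multi-marker map (it assumes the generated HTML also loads the Google Maps JavaScript API and has a div with the id "map"; the addresses array is the part your VBA would write out for the current data set):

// Assumes the generated HTML also contains something like:
//   <div id="map" style="width:100%;height:100%"></div>
//   <script src="https://maps.googleapis.com/maps/api/js"></script>
// The addresses array is what the Access/VBA code would write out.
var addresses = [
  '1600 Amphitheatre Parkway, Mountain View, CA',
  '1 Microsoft Way, Redmond, WA'
];

function initMap() {
  var map = new google.maps.Map(document.getElementById('map'), {
    zoom: 4,
    center: { lat: 39.5, lng: -98.35 } // rough center of the US
  });
  var geocoder = new google.maps.Geocoder();
  var bounds = new google.maps.LatLngBounds();

  addresses.forEach(function (address) {
    // Geocode each address and drop a marker at the result
    geocoder.geocode({ address: address }, function (results, status) {
      if (status === 'OK' && results.length) {
        var position = results[0].geometry.location;
        new google.maps.Marker({ map: map, position: position, title: address });
        bounds.extend(position);
        map.fitBounds(bounds); // keep all markers in view
      }
    });
  });
}

window.onload = initMap;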

Browser plugin for cross-domain iframe communication

I would like to create a browser plugin/extension that would allow the browser to read contents of a cross-domain iframe. I understand that this isn't possible with javascript, but perhaps someone could point me in the right direction of how to create a plugin that users could install. A cross-browser solution would be ideal.
Specifically, I am creating helpful navigation utility, and I want to know the url of the iframe so that I can prevent the iframe from navigating to any questionable sites accidentally. I would also like to detect the size of the contents.
Thanks in advance.
Option 1: file_get_contents:
What you can try is to fetch the page's contents server-side with the PHP function file_get_contents, load the CSS files, and work out the contents and the size of the page from there.
Option 2: Headers:
You can start here: http://www.senocular.com/pub/adobe/crossdomain/policyfiles.html
See the "allow-access-from" section where you can allow domains to be accessed cross domain when they have specific headers.
Userscripts have cross-domain XMLHttpRequest, and they will even run on all browsers. They (or at least Kango's Content Scripts) have the ability to write and read stored values for cross-window communication.
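As an example of what that can look like in a Greasemonkey/Tampermonkey-style userscript (a sketch only; the selector and the blocklist idea are made up, and note that the iframe's src attribute may be stale if the frame has since navigated on its own):

// ==UserScript==
// @name   Inspect a cross-domain iframe's page
// @match  *://*/*
// @grant  GM_xmlhttpRequest
// ==/UserScript==

// GM_xmlhttpRequest is not bound by the same-origin policy the way
// ordinary page scripts are, so it can fetch the framed document.
var frame = document.querySelector('iframe');
if (frame && frame.src) {
  GM_xmlhttpRequest({
    method: 'GET',
    url: frame.src, // the URL the iframe was created with
    onload: function (response) {
      // response.responseText holds the raw HTML of the framed page;
      // from here you could estimate its size or check the URL against
      // a list of questionable sites before allowing navigation.
      console.log(response.responseText.length + ' bytes from ' + frame.src);
    }
  });
}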

How to show a User view in GWT app by typing in browser address bar

I have this GWT app which, say, runs on http://mygwtapp.com/ (which is actually: http://mygwtapp.com/index.html)
The app hosts a database of users, queried by searching usernames using the search view, and results are shown in the user results view. Useful enough. However, I need to add a way for the user view to be opened by just typing http://myapp.com/user123
I am thinking that the question I have here, the answer is a server side solution. However if there's a client side solution, please let me know.
One fellow here on Stack Overflow suggested that the format would be like this:
mygwtapp.com/index.html#user123
However, it's important that the format be like: http://myapp.com/user123
The 'something' in 'http://host/path#something' is a fragment identifier. FIs have a specific feature: the page isn't reloaded if only the FI part of the URL changes, but the change still takes part in the browser history.
FIs are a browser mechanism that GWT uses to create "pages", i.e. parts of a GWT application that are bookmarkable and have history support.
You can try to use a URL without # (the FI separator), but then you will have a normal URL that reloads the page with every change, and it can't (easily) be part of a normal GWT app.
mygwtapp.com/index.html#user123
That would be using the History mechanism (http://code.google.com/webtoolkit/doc/latest/DevGuideCodingBasicsHistory.html) which I would add is the recommended way of doing it.
However, if you insist on using something like http://myapp.com/user123, one of the possible ways is to have a servlet which accepts this request (you might have to switch to something like http://myapp.com/details?id=user123). The servlet will look up the DB and return your host HTML. Before returning, it will inject the required details as a Dictionary entry in the page (http://google-web-toolkit.googlecode.com/svn/javadoc/1.5/com/google/gwt/i18n/client/Dictionary.html). On the client you can read this data and display it in the UI.
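To sketch the injection part: the servlet would write something like this into the head of the host page it returns (the object name userDetails and its fields are just examples; GWT's Dictionary.getDictionary("userDetails") then reads it on the client):

// Injected by the servlet into the returned host page (inside a <script> tag).
// The name "userDetails" and its fields are illustrative only.
var userDetails = {
  id: 'user123',
  name: 'Example User',
  joined: '2011-04-01'
};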

What's the shebang/hashbang (#!) in Facebook and new Twitter URLs for?

I've just noticed that the long, convoluted Facebook URLs that we're used to now look like this:
http://www.facebook.com/example.profile#!/pages/Another-Page/123456789012345
As far as I can recall, earlier this year it was just a normal URL-fragment-like string (starting with #), without the exclamation mark. But now it's a shebang or hashbang (#!), which I've previously only seen in shell scripts and Perl scripts.
The new Twitter URLs now also feature the #! symbols. A Twitter profile URL, for example, now looks like this:
http://twitter.com/#!/BoltClock
Does #! now play some special role in URLs, like for a certain Ajax framework or something, since the new Facebook and Twitter interfaces are now largely Ajaxified?
Would using this in my URLs benefit my Web application in any way?
This technique is now deprecated.
This used to tell Google how to index the page.
https://developers.google.com/webmasters/ajax-crawling/
This technique has mostly been supplanted by the ability to use the JavaScript History API that was introduced alongside HTML5. For a URL like www.example.com/ajax.html#!key=value, Google will check the URL www.example.com/ajax.html?_escaped_fragment_=key=value to fetch a non-AJAX version of the contents.
The octothorpe/number-sign/hashmark has a special significance in a URL: it normally identifies the name of a section of a document. The precise term is that the text following the hash is the anchor portion of a URL. If you use Wikipedia, you will see that most pages have a table of contents and you can jump to sections within the document with an anchor, such as:
https://en.wikipedia.org/wiki/Alan_Turing#Early_computers_and_the_Turing_test
https://en.wikipedia.org/wiki/Alan_Turing identifies the page and Early_computers_and_the_Turing_test is the anchor. The reason that Facebook and other Javascript-driven applications (like my own Wood & Stones) use anchors is that they want to make pages bookmarkable (as suggested by a comment on that answer) or support the back button without reloading the entire page from the server.
In order to support bookmarking and the back button, you need to change the URL. However, if you change the page portion (with something like window.location = 'http://raganwald.com';) to a different URL, or to the same URL without an anchor, the browser will load the entire page from that URL. Try this in Firebug or Safari's Javascript console. Load http://minimal-github.gilesb.com/raganwald. Now in the Javascript console, type:
window.location = 'http://minimal-github.gilesb.com/raganwald';
You will see the page refresh from the server. Now type:
window.location = 'http://minimal-github.gilesb.com/raganwald#try_this';
Aha! No page refresh! Type:
window.location = 'http://minimal-github.gilesb.com/raganwald#and_this';
Still no refresh. Use the back button to see that these URLs are in the browser history. The browser notices that we are on the same page but just changing the anchor, so it doesn't reload. Thanks to this behaviour, we can have a single Javascript application that appears to the browser to be on one 'page' but to have many bookmarkable sections that respect the back button. The application must change the anchor when a user enters different 'states', and likewise if a user uses the back button or a bookmark or a link to load the application with an anchor included, the application must restore the appropriate state.
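A bare-bones way to do that last part, assuming the anchor names map directly to application states (showSection is a hypothetical function that renders the corresponding 'page'):

// Restore application state from the anchor, both on initial load
// (a bookmark or shared link) and when the user navigates Back/Forward.
function applyStateFromAnchor() {
  var state = window.location.hash.replace(/^#/, ''); // e.g. "try_this"
  if (state) {
    showSection(state); // hypothetical: render the matching 'page'
  }
}

window.addEventListener('hashchange', applyStateFromAnchor);
applyStateFromAnchor(); // handle the anchor the page was loaded with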
So there you have it: Anchors provide Javascript programmers with a mechanism for making bookmarkable, indexable, and back-button-friendly applications. This technique has a name: It is a Single Page Interface.
p.s. There is a fourth benefit to this technique: Loading page content through AJAX and then injecting it into the current DOM can be much faster than loading a new page. In addition to the speed increase, further tricks like loading certain portions in the background can be performed under the programmer's control.
p.p.s. Given all of that, the 'bang' or exclamation mark is a further hint to Google's web crawler that the exact same page can be loaded from the server at a slightly different URL. See Ajax Crawling. Another technique is to make each link point to a server-accessible URL and then use unobtrusive Javascript to change it into an SPI with an anchor.
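That last trick can be as simple as the following sketch (progressive enhancement over ordinary crawlable links; the class name spi-link is invented for the example):

// Each link keeps a real, crawlable href for search engines and for users
// without Javascript; when Javascript runs, clicking it only changes the
// anchor, and the SPI code loads the same content with AJAX instead.
var links = document.querySelectorAll('a.spi-link');
Array.prototype.forEach.call(links, function (link) {
  link.addEventListener('click', function (event) {
    event.preventDefault(); // cancel the normal full-page navigation
    // e.g. href="/pages/Another-Page" becomes "#!/pages/Another-Page"
    window.location.hash = '!' + link.getAttribute('href');
    // A hashchange handler (as in the previous sketch) then fetches the
    // content and injects it into the current DOM.
  });
});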
Here's the key link again: The Single Page Interface Manifesto
First of all: I'm the author of The Single Page Interface Manifesto cited by raganwald.
As raganwald has explained very well, the most important aspect of the Single Page Interface (SPI) approach used in Facebook and Twitter is the use of the hash # in URLs.
The ! character is added only for Google's purposes; this notation is a Google "standard" for crawling AJAX-intensive web sites (in the extreme, Single Page Interface web sites). When Google's crawler finds a URL with #!, it knows that an alternative conventional URL exists providing the same page "state", but in this case at load time.
Although the #! combination is very interesting for SEO, it is only supported by Google (as far as I know); with some JavaScript tricks you can build SPI web sites that are SEO-compatible with any web crawler (Yahoo, Bing...).
The SPI Manifesto and demos do not use Google's format of ! in hashes; this notation could easily be added, and SPI crawling could be even easier (UPDATE: the ! notation is now used and remains compatible with other search engines).
Take a look at this tutorial; it is an example of a simple ItsNat SPI site, but you can pick up some ideas for other frameworks. This example is SEO-compatible with any web crawler.
The hard problem is to generate any (or selected) "AJAX page state" as plain HTML for SEO. In ItsNat this is very easy and automatic: the same site is at the same time SPI-based and page-based for SEO (or for when JavaScript is disabled, for accessibility). With other web frameworks you can always follow the double-site approach: one site is SPI-based and another is page-based for SEO; for instance, Twitter uses this "double site" technique.
I would be very careful if you are considering adopting this hashbang convention.
Once you hashbang, you can’t go back. This is probably the stickiest issue. Ben’s post put forward the point that when pushState is more widely adopted then we can leave hashbangs behind and return to traditional URLs. Well, fact is, you can’t. Earlier I stated that URLs are forever, they get indexed and archived and generally kept around. To add to that, cool URLs don’t change. We don’t want to disconnect ourselves from all the valuable links to our content. If you’ve implemented hashbang URLs at any point then want to change them without breaking links the only way you can do it is by running some JavaScript on the root document of your domain. Forever. It’s in no way temporary, you are stuck with it.
You really want to use pushState instead of hashbangs, because making your URLs ugly and possibly broken -- forever -- is a colossal and permanent downside to hashbangs.
To have a good follow-up about all this, Twitter - one of the pioneers of hashbang URLs and the single-page interface - admitted that the hashbang system was slow in the long run and that they have actually started reversing the decision and returning to old-school links.
Article about this is here.
I always assumed the ! just indicated that the hash fragment that followed corresponded to a URL, with ! taking the place of the site root or domain. It could be anything, in theory, but it seems the Google AJAX Crawling API likes it this way.
The hash, of course, just indicates that no real page reload is occurring, so yes, it’s for AJAX purposes. Edit: Raganwald does a lovely job explaining this in more detail.