I couldn't find any API that returns the article in usable HTML form. Most of them return extracts with very poor HTML formatting, which makes them useless for anything.
There is no way to tell what Facebook did exactly, but the easiest way to grab the HTML contents of an article is by using the render action, i.e. by appending action=render to the URL:
https://en.wikipedia.org/wiki/Cooking?action=render
This produces the exact same HTML you can see on Wikipedia, but omits the non-content parts (sidebar etc.). If you need to reproduce the layout of an article more faithfully, you will have to reuse parts of Wikipedia's CSS, and there is no easy way to do that.
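For what it's worth, here is a minimal sketch of grabbing that rendered HTML programmatically; it assumes an environment with a fetch() implementation (a modern browser, or Node 18+) and simply uses the action=render URL shown above:

// Fetch the rendered article HTML (content only, no sidebar or site chrome).
var url = 'https://en.wikipedia.org/wiki/Cooking?action=render';

fetch(url)
  .then(function (response) { return response.text(); })
  .then(function (html) {
    // `html` now holds the article markup, ready to post-process or embed.
    console.log(html.slice(0, 500));
  });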
As of just a few days ago there is a REST API for getting the HTML. It is available at https://rest.wikimedia.org/
Since it is so new, Facebook is probably not using it (yet), but if you want the HTML for yourself I suggest you start exploring there.
I want to run an A/B test or an experiment on a whole section of the site. For example, on my /blog/ pages, one variation would have a newsletter form and the other a free ebook download button.
The problem is that I have to use a full URL path for the experiments, for example /blog/2013/article/1?var=1 and /blog/2013/article/1?var=2. With this method I would need to create a new experiment for each blog post, which is impossible.
Any tips on how to approach this?
It's possible, but the documentation is lacking.
When you choose your variation URLs, you need to use relative instead of http://. This lets you use query parameters to define the variations, instead of the full URL. In your example, you would define your original page as:
http://www.example.com/blog/2013/article/1
and your variation URLs would be ?var=1, ?var=2, etc., using relative as the option in the dropdown (instead of http:// or https://).
Here's the not-so-clear documentation on using relative URLs for your variations:
https://support.google.com/analytics/answer/2664470?hl=en&ref_topic=1745208
One important thing to remember is that if you're doing it this way, you need to include the content experiment code on every "original" page.
There's also another way to get even more control over serving the variation pages and controlling the experiment: the Content Experiments JavaScript API. This is a relatively new feature; you can see the developer documentation about it here:
https://developers.google.com/analytics/devguides/collection/gajs/experiments
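As a rough sketch of how that API could sidestep the query-parameter problem, something along these lines runs on the page itself (the experiment ID and the element IDs below are placeholders, not values from your setup):

// Load the API first, e.g. with:
// <script src="//www.google-analytics.com/cx/api.js?experiment=EXPERIMENT_ID"></script>
// Ask which variation this visitor should see; 0 is the original,
// 1, 2, ... are the variations defined for the experiment.
var variation = cxApi.chooseVariation();

// Toggle the page content in place instead of redirecting to ?var=N URLs.
if (variation === 1) {
  document.getElementById('newsletter-signup').style.display = 'block';
} else if (variation === 2) {
  document.getElementById('ebook-download').style.display = 'block';
}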
I am not sure this is possible. You might look at a more robust yet simple to use tool like Visual Website Optimizer or Optimizely.
I could not find a Wicket tag like wicket:include. Can anyone suggest anything? I want to include/inject raw source into HTML files. If there is no such utility, any suggestions on how to develop it?
Update
I am looking for something like jsp:include; the inclusion is expected to be handled on the server side.
To do this, you'll need to implement your own IComponentResolver.
This blog article shows an example somewhat resembling what you're after.
Is it raw markup that you want to include, or Wicket content?
If it's raw markup, even a simple Label can do that for you. If you call setEscapeModelStrings(false), the string value of the model will be copied straight into the markup. (Watch out for potential XSS attacks though.)
"Including" Wicket markup is done via Panels (or occasionally Fragments)
Update: If you add more detail about the actual problem you need to solve, there's a good chance that we can find a more "wickety" solution; after all, JSP and Wicket are two different worlds, and the mindset of one doesn't work very well in the other.
What I am trying to do is load a webpage in a UIWebView. The problem is that I need to do some preprocessing on the HTML before displaying it in the web view.
UIWebView's loadHTMLString is quite slow when the HTML is big. I don't need to display the full page, so I am trying to remove some HTML nodes before displaying it in the web view to speed up the loading time.
I don't think using a regex for that is a wise idea. I checked out NSXMLParser and TFHpple, but I couldn't find any way to remove nodes from the HTML tree using an XPath or anything similar.
I know I could do that using JavaScript, but that won't solve my problem. I also have no control over the website, so I can't edit the webpage itself.
Is there something as easy as deleteNodeUsingXPath or something :)
Cheers and thanks a lot for your help in advance.
One possible solution: build a proxy website which strips out the unwanted stuff. The iPhone accesses the proxy website's URL; the proxy website loads the page from the original website, strips out the unwanted stuff, and replies with what remains.
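A minimal sketch of that proxy, assuming a Node.js environment (Node 18+ for the built-in fetch) and the third-party cheerio package; the selectors are placeholders for whatever you actually want to strip:

const http = require('http');
const cheerio = require('cheerio');

http.createServer(async function (req, res) {
  // The original page URL is passed to the proxy as ?url=...
  const target = new URL(req.url, 'http://localhost').searchParams.get('url');
  const html = await (await fetch(target)).text();

  // Drop the nodes the app doesn't need before sending the page on.
  const $ = cheerio.load(html);
  $('#sidebar, .ads, script').remove();

  res.writeHead(200, { 'Content-Type': 'text/html; charset=utf-8' });
  res.end($.html());
}).listen(8080);

The app would then point the UIWebView at the proxy (e.g. http://yourproxy:8080/?url=<original page URL>) instead of the original site.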
There is a tool called Objective-C-HTML-Parser that will do what you are looking for. The documentation is thorough, and the implementation is pretty straightforward.
Basically, you take your HTML string and create an HTMLParser object that you can then manipulate however you want. It is a very powerful library that lets you do pretty much whatever you want with HTML through easy-to-use Objective-C APIs.
Good luck!
I run a blog where the blog title is either an external link or an internal link to a longer piece, much like what you’ve seen on other blogs. For some reason, ExpressionEngine (1.6.x) does nothing to sanitize things such as ampersands in the URLs provided.
I use Markdown in the body text, which seems to do a great job of sanitizing all URLs. Yet ExpressionEngine’s own handling of the titles doesn’t cut it. I have tried formatting the “title URLs” in Markdown and failed miserably, and damn if I know what the hell it is in ExpressionEngine that prevents me from using it.
So the question boils down to what other ExpressionEngine 1.6.x users do and have done, or whether someone can come up with a MacGyver-esque solution. Because I’ve been stumped upwards of half a year.
The XML Encode Plugin for EE1 from Rick Ellis of EllisLab will convert your special characters to HTML entities.
The plugin was originally designed to convert reserved XML characters to HTML entities in the ExpressionEngine RSS templates, but should work for what you need.
To use the plugin, wrap your {title_link} custom field in between its tag pairs:
{exp:xml_encode}
{title_link}
{/exp:xml_encode}
This would result in:
http://www.google.com/search?q=nytimes&btnG=Google+Search
being converted into:
http://www.google.com/search?q=nytimes&amp;btnG=Google+Search
Other EE1 plugins that offer similar but more advanced features are Cleaner by utilitEEs (Oliver Heine) and Low Replace by Lodewijk Schutte.
Can someone point me to an article (or discuss here) that explains how an add-on/extension can read what a user has entered into a form in the browser, so you can present data to them based on their search parameters?
An example would be the Sidestep extension that opens a sidebar when a user searches on an airline/travel site and presents them a Sidestep meta search based on the parameters used on the original airline/travel site.
Browser extensions are necessarily browser specific. I would look at the APIs for your target browser. Here's a thread on Firefox 3.0 extensions.
Extension to what? Your body? :)
If you're talking about a browser extension, then I'm pretty sure you're on the wrong track.
You could just search for forms in the current page and, based on the field names, try to figure out what the user searched for...
A JS file and an AJAX call are all you need, and you could basically skip the AJAX call too... but I generally prefer server-side processing, as the source code stays more hidden that way.
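As a rough illustration of that approach in a modern extension's content script (the field names in the comment are purely hypothetical):

// Watch every form on the page and read its fields when it is submitted.
document.querySelectorAll('form').forEach(function (form) {
  form.addEventListener('submit', function () {
    var params = {};
    new FormData(form).forEach(function (value, name) {
      params[name] = value;
    });
    // On a travel site params might now hold something like
    // { origin: 'SFO', destination: 'JFK', date: '2014-05-01' };
    // use it to build your own meta search in a sidebar.
    console.log('User searched with:', params);
  });
});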