Can I embed source files from GitHub on my web page other than Gists?

Context
You can create a Gist on GitHub and embed it on your web page: embedding Gists.
This is an example of a randomly chosen Gist: tap.groovy.
Question
Is embedding also possible with other code files from GitHub, for example with this randomly chosen C# file ICommand.cs which is not a Gist?

You can try https://emgithub.com, which does exactly what you want.
To embed the example file ICommand.cs in your question, you can just add "em" before "github.com" in the address bar, then press Enter.
Then you can get a script tag like this:
<script src="https://emgithub.com/embed-v2.js?target=https%3A%2F%2Fgithub.com%2Fdotnet%2Fcorefx%2Fblob%2Fmaster%2Fsrc%2FSystem.ObjectModel%2Fsrc%2FSystem%2FWindows%2FInput%2FICommand.cs&style=default&type=code&showBorder=on&showLineNumbers=on&showFileMeta=on&showCopy=on"></script>
Note that if you simply click "Run code snippet" on Stack Overflow, the copy button in the top-right corner may not work. Running it outside SO works fine.
Unlike other websites that do similar work, EmGithub.com is a static site hosted on GitHub Pages. Fetching target files and highlighting are done in your browser.
Disclosure: I'm the developer of it :)

You can use https://gist-it.appspot.com/:
<script src="http://gist-it.appspot.com/https://github.com/dotnet/corefx/blob/master/src/System.ObjectModel/src/System/Windows/Input/ICommand.cs"></script>

There's a standard for embedding content from one website in another via a URL, called oEmbed. Unfortunately, GitHub is not an oEmbed provider, i.e. it doesn't support oEmbed for its URLs.
I found a proxy service, Oembed Proxy for GitHub, which adds oEmbed support for GitHub's code URLs. You pass a GitHub URL as a parameter to the proxy's URL, and the resulting URL can be pasted into another website, assuming that website supports embedding oEmbed links.
Another obstacle is that not every website supports embedding oEmbed URLs. According to the proxy's documentation, Notion is one website that supports them. I did some research, and it looks like it should be possible to add oEmbed support to, e.g., WordPress or Jekyll.
This answer provides a very limited solution, due to the small adoption of oEmbed. I thought it would be worth spreading the word nonetheless.
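For readers unfamiliar with the mechanics: an oEmbed consumer sends the target URL to the provider's endpoint and gets back a JSON document whose html field contains the embed markup. A minimal sketch of consuming such an endpoint by hand (the proxy address here is a placeholder, not the actual service's URL):
<div id="embed-container"></div>
<script>
  // Placeholder endpoint; substitute the proxy's real oEmbed URL.
  const endpoint = 'https://example-oembed-proxy.dev/oembed';
  const target = 'https://github.com/dotnet/corefx/blob/master/src/System.ObjectModel/src/System/Windows/Input/ICommand.cs';
  fetch(endpoint + '?url=' + encodeURIComponent(target) + '&format=json')
    .then(function (res) { return res.json(); })
    .then(function (data) {
      // Per the oEmbed spec, a "rich" response carries its markup in data.html.
      document.getElementById('embed-container').innerHTML = data.html;
    });
</script>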

Another possible service is https://github.com/finom/github-embed. It seems to have been unmaintained for about two years now, but gist-it seems to have been unmaintained for even longer, around six years. I've tried neither, though.

You can use gistYard
<iframe src="https://gistyard.piyushdev.xyz/emd.html?lang=&from=0&to=&code=https://raw.githubusercontent.com/dotnet/corefx/master/src/System.ObjectModel/src/System/Windows/Input/ICommand.cs&edit=true&dm=off" width="100%" height="330" frameborder="0"></iframe>
It provides features like changing the theme, cutting code directly from the raw file, edit mode, custom styling, and others.

Related

How to get a larger favicon from Google's API?

Is it possible to get a larger version of the favicon from Google's API or from somewhere else?
This is the url.
http://www.google.com/s2/favicons?domain=google.com
I searched for an alternative API on ProgrammableWeb and Google, but many of them don't exist anymore, and the one I found that actually seems to work isn't free (http://grabicon.com/).
I need the icon for a VB.NET project that has a list of websites with icons. But 16x16 icons are too small for that.
Looks like there is a size parameter in Google's API now:
https://www.google.com/s2/favicons?sz=64&domain_url=yahoo.com
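The URL can go straight into an img tag, for example:
<img src="https://www.google.com/s2/favicons?sz=64&domain_url=yahoo.com" alt="Yahoo favicon" width="64" height="64">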
Edited:
The below answer is no longer valid, but the code is freely available on GitHub:
GitHub -> Favicons for all!
Original answer
You can also try Statvoo's Favicon API, e.g.
https://api.statvoo.com/favicon/?url=google.com
https://api.statvoo.com/favicon/?url=stackoverflow.com
etc.
They also have quite a few other APIs you can use if you look around. Most of them are free and have been around for years.
Looks like Google has a size attribute too.
https://www.google.com/s2/favicons?sz=64&domain_url=https://stackoverflow.com/
Here are some favicon fetchers I have found:
Free Favicon-Service by AllesEDV.at - https://f1.allesedv.com/stackoverflow.com
Google Favicon Snatcher - https://www.google.com/s2/favicons?domain=stackoverflow.com
Favicon Grabber - http://favicongrabber.com/api/grab/stackoverflow.com
Favicon Grabber returns a JSON list of icon URLs.
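A minimal sketch of consuming that response in the browser (the exact JSON shape, assumed here to be an icons array with src entries, may differ; check the service's docs):
<script>
  fetch('http://favicongrabber.com/api/grab/stackoverflow.com')
    .then(function (res) { return res.json(); })
    .then(function (data) {
      // Assumed shape: { icons: [{ src: '...', sizes: '...' }, ...] }
      var icon = data.icons && data.icons[0];
      if (icon) {
        var img = document.createElement('img');
        img.src = icon.src;
        document.body.appendChild(img);
      }
    });
</script>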
Alternatively you can load the main page of the site and figure it out from there: https://stackoverflow.com/a/1990487/
According to https://news.ycombinator.com/item?id=17190599:
Unless that endpoint can also return other resolutions, Favicon Kit offers more:
https://api.faviconkit.com/twitter.com/144
https://api.faviconkit.com/twitter.com/16
(Though, I will say, the URIs returned for Twitter and the image sizes don't actually align in those cases. The first is actually 192×192 pixels, and the second is 32×32 pixels. That seems odd. Maybe they should have endpoints like domain/large, domain/medium, domain/small?)
Favicons are specified either as part of the HTML page, the HTTP response to a request for a page, or simply by being hosted at a default location.
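For illustration, the in-page variant looks like this; the other two options are an HTTP Link header and the default /favicon.ico location:
<!-- Declared in the page's <head>; a single .ico can bundle several sizes. -->
<link rel="icon" href="/favicon.ico" sizes="16x16 32x32">
<!-- Or a PNG at an explicit size: -->
<link rel="icon" type="image/png" href="/favicon-64.png" sizes="64x64">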
That's true for all sites. There are plenty of browser extensions that can help you figure out the favicons a page provides, if you can't manage it by hand. For example, right-clicking in Firefox, then "Page Info", "Media", "sort by type" -> "Icon" should show all icons that the browser can find. It's not usual to have icons larger than 32x32, and Google might not be an exception.
Also be aware that the .ico format can contain multiple icon sizes, and not all tools show them. So saving that .ico to your computer and inspecting it with a tool known to handle all sizes contained in a single file might help.
Last word of advice: you're dealing with the logo, the very core of the brand, of a multi-billion-dollar company. You might want to check their policy on using that logo in your project. It's probably OK (for example, browsers don't seem to get in trouble for showing a Google logo next to their Google search box), but I'd still take care not to give the impression that you're associating a product of your own making with their logo.

GitHub pages generator removing <video> tag

Context
I usually set up quick GitHub pages to document a few developments I do. They are usually very simple pages, which I generate from the repo settings using the Page Generator. I want to continue using this method, as moving to proper gh-pages with Jekyll is too much overhead for something so simple.
Recently I came across a use case where adding a simple 2-minute video to the first section made a lot of sense. Not knowing any native Markdown for HTML video, I decided to add the HTML code directly, as I do in a lot of other situations:
<video width="640" height="400" controls preload>
<source src="https://github.my.company.com/Org/sample/blob/master/intro.mp4?raw=true"></source>
</video>
Problem
When I generate the page, the tag is not there, which is what normally happens when the video tag is not supported. If I open the Chrome console and edit the HTML directly, the video shows fine, as expected, and I can play it.
I can only assume that GitHub's Markdown engine is removing the video tag because the context in which it is running does not support video (headless, non-compatible agent, whatnot).
GitHub says it supports native HTML in page rendering, but there's no specific Markdown to say "DO NOT PARSE THIS AT ALL COSTS", which leaves me without many options.
Question
Has anyone come across this issue, and do you know if it's possible to have a video tag in a generated page without moving on to Jekyll?
As a quick solution to work around this issue: you can convert your video into a GIF using any converter, then insert it into your Markdown, e.g.:
## Website Overview
![alt_text](path_to_the_.gif)
You can delegate all the heavy lifting to a video hosting service.
Advantages are:
they do all the HTML video / Flash fallback for you
they can serve the proper encoding / bandwidth depending on device / network
they have specialized CDNs that ensure good delivery (this depends on the carrier, but you cannot know that in advance)
Everybody in the industry delegates the pain of video management.
And the only code you have to add is something like this:
<iframe width="420" height="315" src="//www.youtube.com/embed/KgLfpnPdqZw" frameborder="0" allowfullscreen></iframe>

Google Content Experiments for a whole section of the site

I want to run an A/B test or an experiment on a whole section of the site. For example, on my /blog/ pages, one variation would have a newsletter form and the other variation a free ebook download button.
The problem is that I have to use a full URL path for the experiments, for example /blog/2013/article/1?var=1 and /blog/2013/article/1?var=2. With this method I would need to create a new experiment for each blog post, which is impossible.
Any tips on how to approach this?
It's possible, but the documentation is lacking.
When you choose your variation URLs, you need to use relative instead of http://. This lets you use query parameters to define the variations, instead of the full URL. In your example, you would define your original page as:
http://www.example.com/blog/2013/article/1
and your variation URLs would be ?var=1, ?var=2, etc., using relative as the option in the dropdown (instead of http:// or https://).
Here's the not-so-clear documentation on using relative URLs for your variations:
https://support.google.com/analytics/answer/2664470?hl=en&ref_topic=1745208
One important thing to remember is that if you're doing it this way, you need to include the content experiment code on every "original" page.
There's also another way to have even more control over serving the variation pages and controlling the experiment: the Content Experiments JavaScript API. This is a relatively new feature; you can see the developer documentation about it here:
https://developers.google.com/analytics/devguides/collection/gajs/experiments
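To give a flavor of that API: per those docs, you load the experiment's cx/api.js script and ask it which variation to serve. A rough sketch (the experiment ID and element IDs are placeholders for your own):
<script src="//www.google-analytics.com/cx/api.js?experiment=YOUR_EXPERIMENT_ID"></script>
<script>
  // cxApi picks a variation for this visitor and remembers it across visits.
  var variation = cxApi.chooseVariation();
  if (variation === 1) {
    // e.g. swap the newsletter form for the ebook download button.
    document.getElementById('newsletter-form').style.display = 'none';
    document.getElementById('ebook-button').style.display = 'block';
  }
</script>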
I am not sure this is possible. You might look at a more robust yet simple to use tool like Visual Website Optimizer or Optimizely.

What's the shebang/hashbang (#!) in Facebook and new Twitter URLs for?

I've just noticed that the long, convoluted Facebook URLs that we're used to now look like this:
http://www.facebook.com/example.profile#!/pages/Another-Page/123456789012345
As far as I can recall, earlier this year it was just a normal URL-fragment-like string (starting with #), without the exclamation mark. But now it's a shebang or hashbang (#!), which I've previously only seen in shell scripts and Perl scripts.
The new Twitter URLs now also feature the #! symbols. A Twitter profile URL, for example, now looks like this:
http://twitter.com/#!/BoltClock
Does #! now play some special role in URLs, like for a certain Ajax framework or something since the new Facebook and Twitter interfaces are now largely Ajaxified?
Would using this in my URLs benefit my Web application in any way?
This technique is now deprecated.
This used to tell Google how to index the page.
https://developers.google.com/webmasters/ajax-crawling/
This technique has mostly been supplanted by the ability to use the JavaScript History API that was introduced alongside HTML5. For a URL like www.example.com/ajax.html#!key=value, Google will check the URL www.example.com/ajax.html?_escaped_fragment_=key=value to fetch a non-AJAX version of the contents.
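For the historical record, the server side of that scheme amounted to detecting the _escaped_fragment_ query parameter and returning a pre-rendered HTML snapshot. A rough sketch in Node/Express (renderSnapshot is a hypothetical helper standing in for your own snapshot logic):
const express = require('express');
const app = express();

app.get('/ajax.html', (req, res) => {
  const fragment = req.query._escaped_fragment_;
  if (fragment !== undefined) {
    // Crawler request: serve a static, pre-rendered snapshot of the AJAX state.
    res.send(renderSnapshot(fragment)); // hypothetical helper
  } else {
    // Normal browser request: serve the JavaScript-driven page.
    res.sendFile(__dirname + '/ajax.html');
  }
});

app.listen(3000);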
The octothorpe/number-sign/hashmark has a special significance in a URL: it normally identifies the name of a section of a document. The precise term is that the text following the hash is the anchor portion of a URL. If you use Wikipedia, you will see that most pages have a table of contents, and you can jump to sections within the document with an anchor, such as:
https://en.wikipedia.org/wiki/Alan_Turing#Early_computers_and_the_Turing_test
https://en.wikipedia.org/wiki/Alan_Turing identifies the page and Early_computers_and_the_Turing_test is the anchor. The reason that Facebook and other Javascript-driven applications (like my own Wood & Stones) use anchors is that they want to make pages bookmarkable (as suggested by a comment on that answer) or support the back button without reloading the entire page from the server.
In order to support bookmarking and the back button, you need to change the URL. However, if you change the page portion (with something like window.location = 'http://raganwald.com';) to a different URL, or don't specify an anchor, the browser will load the entire page from the URL. Try this in Firebug or Safari's JavaScript console. Load http://minimal-github.gilesb.com/raganwald. Now in the JavaScript console, type:
window.location = 'http://minimal-github.gilesb.com/raganwald';
You will see the page refresh from the server. Now type:
window.location = 'http://minimal-github.gilesb.com/raganwald#try_this';
Aha! No page refresh! Type:
window.location = 'http://minimal-github.gilesb.com/raganwald#and_this';
Still no refresh. Use the back button to see that these URLs are in the browser history. The browser notices that we are on the same page but just changing the anchor, so it doesn't reload. Thanks to this behaviour, we can have a single Javascript application that appears to the browser to be on one 'page' but to have many bookmarkable sections that respect the back button. The application must change the anchor when a user enters different 'states', and likewise if a user uses the back button or a bookmark or a link to load the application with an anchor included, the application must restore the appropriate state.
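A minimal sketch of that state-restoring logic, using the hashchange event (showSection stands in for whatever your app does to render a given state):
// Render the section named by the current anchor, e.g. #and_this.
function restoreState() {
  var state = window.location.hash.replace(/^#/, '') || 'home';
  showSection(state); // hypothetical app-specific renderer
}

// Fires on back/forward navigation and on manual anchor changes.
window.addEventListener('hashchange', restoreState);
// Also restore state when the page is loaded from a bookmark or a link.
window.addEventListener('load', restoreState);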
So there you have it: Anchors provide Javascript programmers with a mechanism for making bookmarkable, indexable, and back-button-friendly applications. This technique has a name: It is a Single Page Interface.
p.s. There is a fourth benefit to this technique: Loading page content through AJAX and then injecting it into the current DOM can be much faster than loading a new page. In addition to the speed increase, further tricks like loading certain portions in the background can be performed under the programmer's control.
p.p.s. Given all of that, the 'bang' or exclamation mark is a further hint to Google's web crawler that the exact same page can be loaded from the server at a slightly different URL. See Ajax Crawling. Another technique is to make each link point to a server-accessible URL and then use unobtrusive Javascript to change it into an SPI with an anchor.
Here's the key link again: The Single Page Interface Manifesto
First of all: I'm the author of The Single Page Interface Manifesto cited by raganwald.
As raganwald has explained very well, the most important aspect of the Single Page Interface (SPI) approach used in Facebook and Twitter is the use of the hash # in URLs.
The ! character is added only for Google's purposes; this notation is a Google "standard" for crawling AJAX-intensive web sites (in the extreme, Single Page Interface web sites). When Google's crawler finds a URL with #!, it knows that an alternative conventional URL exists providing the same page "state", but in this case at load time.
Although the #! combination is very interesting for SEO, it is only supported by Google (as far as I know); with some JavaScript tricks you can build SPI web sites that are SEO-compatible with any web crawler (Yahoo, Bing, ...).
The SPI Manifesto and demos do not use Google's format of ! in hashes; this notation could easily be added, and SPI crawling could be even easier (UPDATE: the ! notation is now used and remains compatible with other search engines).
Take a look at this tutorial; it is an example of a simple ItsNat SPI site, but you can pick up some ideas for other frameworks. This example is SEO-compatible with any web crawler.
The hard problem is generating any (or selected) "AJAX page state" as plain HTML for SEO. In ItsNat this is very easy and automatic: the same site is at the same time SPI-based and page-based for SEO (or for when JavaScript is disabled, for accessibility). With other web frameworks you can always follow the double-site approach: one site is SPI-based and another is page-based for SEO. For instance, Twitter uses this "double site" technique.
I would be very careful if you are considering adopting this hashbang convention.
Once you hashbang, you can’t go back. This is probably the stickiest issue. Ben’s post put forward the point that when pushState is more widely adopted then we can leave hashbangs behind and return to traditional URLs. Well, fact is, you can’t. Earlier I stated that URLs are forever, they get indexed and archived and generally kept around. To add to that, cool URLs don’t change. We don’t want to disconnect ourselves from all the valuable links to our content. If you’ve implemented hashbang URLs at any point then want to change them without breaking links the only way you can do it is by running some JavaScript on the root document of your domain. Forever. It’s in no way temporary, you are stuck with it.
You really want to use pushState instead of hashbangs, because making your URLs ugly and possibly broken -- forever -- is a colossal and permanent downside to hashbangs.
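For comparison, the pushState equivalent keeps clean URLs with no hash at all. A minimal sketch (loadContent is a hypothetical stand-in for your own AJAX rendering):
// Navigate within the app: update content, then record a real URL.
function navigate(path) {
  loadContent(path); // hypothetical AJAX renderer
  history.pushState({ path: path }, '', path);
}

// Restore state when the user presses the back/forward button.
window.addEventListener('popstate', function (event) {
  if (event.state) {
    loadContent(event.state.path);
  }
});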
To have a good follow-up on all this: Twitter, one of the pioneers of hashbang URLs and the single-page interface, admitted that the hashbang system was slow in the long run and that they have actually started reversing the decision and returning to old-school links.
Article about this is here.
I always assumed the ! just indicated that the hash fragment that followed corresponded to a URL, with ! taking the place of the site root or domain. It could be anything, in theory, but it seems the Google AJAX Crawling API likes it this way.
The hash, of course, just indicates that no real page reload is occurring, so yes, it’s for AJAX purposes. Edit: Raganwald does a lovely job explaining this in more detail.

Browser Add-On/Extension and Browser Form data

Can someone point me to an article (or discuss here) that explains how an add-on/extension can read what a user has completed in a form in a browser so you can present data to them based on the search parameters?
An example would be the Sidestep extension that opens a sidebar when a user searches on an airline/travel site and presents them a Sidestep meta search based on the parameters used on the original airline/travel site.
Browser extensions are necessarily browser specific. I would look at the APIs for your target browser. Here's a thread on Firefox 3.0 extensions.
Extension to what? Your body? :)
If you're talking about a browser extension, then I'm pretty sure you are on the wrong track.
You could just search for forms in the current page, and based on the field names, try to figure out what the user searched for...
A JS file and an AJAX call are all you need, and you could basically skip the AJAX call too... but I generally prefer server-side processing, as the source code is more hidden that way.
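As a sketch of that idea, a content script could walk the page's forms and collect named field values (the selectors are generic, and sendToSidebar is a hypothetical stand-in for the browser-specific extension messaging):
// Collect name/value pairs from every form field on the current page.
function readSearchParams() {
  var params = {};
  document.querySelectorAll('form input, form select').forEach(function (field) {
    if (field.name) {
      params[field.name] = field.value;
    }
  });
  return params;
}

// e.g. run on submit and hand the data to the extension's sidebar UI.
document.addEventListener('submit', function () {
  sendToSidebar(readSearchParams()); // hypothetical extension messaging
});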