GitHub: How to get `utteranc.es` to work for website discussion - github

I want my website https://friendly.github.io/HistDataVis/ to use the seemingly lightweight and useful discussion feature offered by the https://github.com/utterance app.
I believe I have installed it correctly in my repo, https://github.com/friendly/HistDataVis, but it does not appear on the site when built.
I'm stumped on how to determine what the problem is, or how to correct it. Can anyone help?
For reference, here is my setup:
The website is built in RStudio, using distill in R Markdown.
I created utterances.html with the standard JS code recommended.
<script>
  document.addEventListener("DOMContentLoaded", function () {
    if (!/posts/.test(location.pathname)) {
      return;
    }
    var script = document.createElement("script");
    script.src = "https://utteranc.es/client.js";
    script.setAttribute("repo", "friendly/HistDataVis");
    script.setAttribute("issue-term", "og:title");
    script.setAttribute("crossorigin", "anonymous");
    script.setAttribute("label", "comments ??");
    /* wait for article to load, append script to article element */
    var observer = new MutationObserver(function (mutations, observer) {
      var article = document.querySelector("d-article");
      if (article) {
        observer.disconnect();
        /* HACK: article scroll */
        article.setAttribute("style", "overflow-y: hidden");
        article.appendChild(script);
      }
    });
    observer.observe(document.body, { childList: true });
  });
</script>
In one Rmd file, I use in_header to insert this into the generated HTML file:
---
title: "Discussion"
date: "`r Sys.Date()`"
output:
  distill::distill_article:
    toc: true
    includes:
      in_header: utterances.html
---
I also used this in my _site.yml file to apply it to all Rmd files on the site.
On my GitHub account, I installed utterances under GitHub apps, and gave it repository access to the repo for this site.
Edit 2
Following the solution suggested by @laymonage, I fixed the script. I now get the Comments section on my web page, but I get an error, "utterances not installed", when I try to use it. Yet utterances is installed, as I just checked.

This part of your code:
if (!/posts/.test(location.pathname)) {
  return;
}
prevents the rest of the script from loading, because the condition is always true.
The condition checks whether the value of location.pathname matches the regular expression /posts/ and negates the result (!). That means the condition is true if location.pathname (the path name of the current URL, e.g. /HistDataVis/ for https://friendly.github.io/HistDataVis/) does not contain posts anywhere in the string. None of the pages on your website has posts in the pathname, so the script always ends there.
It should work if you change /posts/ to /HistDataVis or just remove the if block altogether.
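For example, a minimal adjustment to the guard (a sketch, assuming the site stays under the /HistDataVis/ base path) would be:

// Only bail out when we're not somewhere under the site's base path.
if (!/HistDataVis/.test(location.pathname)) {
  return;
}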
Alternatively, you can also try giscus, a similar project that uses GitHub Discussions instead of Issues. Someone already made a guide on how to use it with Distill. Disclaimer: I'm the developer of giscus.

Detecting if browser extension popup is running on a tab that has content script

There are a few similar questions, but none of them really gets at what I'm asking.
I have a browser action popup. In the popup, I want to display settings if you're on a page where the content script has been injected (i.e., any page that matches the matches key within content_scripts in the manifest).
If I'm on a page that doesn't match the content_scripts matches pattern (and so wasn't injected), I just want to display a generic message: "this plugin activates when you're on so-and-so sites".
What is the cleanest way to do it, without adding any unnecessary permissions?
It seems like one option is sending a message to a content script in the active tab and seeing if I get a reply, but that seems really hacky. I should be able to know, just based on a regex, whether I'm on one of the domains that matches my content script.
I'm looking for something that works in both manifest v2 and v3, btw.
TL;DR:
What's the simplest way to display a "you're on a page that matches your content_script" or "you're not on a page that matches your content_script" in a browser_action popup?
I build Chrome extensions full time for an agency and have had projects where I needed to do exactly what you're asking.
The solution can be implemented without any permissions whatsoever; I built mine locally with an empty permissions array (for MV3).
For popup.html, just create two divs and have them default to display: none.
<div id="unsupported" style="display: none;">Ooops! This is not a supported site.</div>
<div id="supported" style="display: none;">Wohoo! This is a supported site!!!!!</div>
For your script.js, wait until the popup loads, then query the active tab in the current window and get that tab's ID to send a message directly to it. If the tab is supported with a content script, it will send back a true response (see the last code snippet). If it isn't supported, the response will be undefined.
async function setUI() {
  let tabData = await chrome.tabs.query({ active: true, currentWindow: true })
  let tabId = tabData[0].id // tabs.query returns an array, but we filtered to the active tab within the current window, which yields only 1 object in the array
  chrome.tabs.sendMessage(tabId, {
    'message': 'isSupported'
  }, (response) => {
    console.log(response)
    // response will be true if the message was successfully sent to the tab and "undefined" if the message was never received (i.e. not supported w/ your content script)
    if (response) return showSupportedHTML()
    // else
    showUnsupportedHTML()
  })
}

function showSupportedHTML() {
  document.querySelector('#supported').style['display'] = ''
}

function showUnsupportedHTML() {
  document.querySelector('#unsupported').style['display'] = ''
}

window.addEventListener('DOMContentLoaded', () => {
  setUI()
})
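One small note (my addition, not part of the original answer): when no content script is listening in the tab, Chrome also sets chrome.runtime.lastError inside the callback; reading it keeps the "Could not establish connection" warning out of the console:

chrome.tabs.sendMessage(tabId, { 'message': 'isSupported' }, (response) => {
  if (chrome.runtime.lastError) {
    // No receiver in this tab; treat it as unsupported.
    return showUnsupportedHTML()
  }
  if (response) return showSupportedHTML()
  showUnsupportedHTML()
})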
Lastly, in your content script, add a message listener to receive the 'isSupported' message that comes in from your popup. If the content script receives that message, have it send a response back with true.
chrome.runtime.onMessage.addListener(function (request, sender, sendResponse) {
  if (request.message == 'isSupported') {
    console.log('run')
    sendResponse(true)
  }
})
Now, this of course only works for manifest v3 because, as far as I know, you can't use chrome.tabs.query for MV2. However, I recommend this solution, as I've implemented pretty much this exact code in other projects for clients and it's never had any issues.
I could look into a solution for MV2, though using the "activeTab" permission would be the right way to do it, I believe. If you really don't want to go that route, you could implement a rather hacky solution. For example, you could use the window 'focus' and 'blur' events to see when a user has entered or left a tab, and set a local storage variable every time a user enters or leaves a supported page. The order of operations for blur and focus is always blur => focus, so when the blur event occurs you set the local storage variable to false. However, if you leave a supported tab for another supported tab, the 'focus' event will trigger immediately afterwards, so you can set that same storage variable back to true.
Now, your content script will load after the tab has been focused, so you'll need to add a function for when the page loads. You can check something like document.hidden: if it returns true, do nothing, because the user has already left the tab; if it returns false, the user is still on the tab and you can set your local storage variable to true.
When the user opens the popup, you check that local storage variable, and depending on whether it's true or false, set the UI accordingly.
Let me know if the mv2 solution made sense or sounds too hacky. Happy to look into it more! :)
Edit: here is the code for MV2. I tested it and it works without any permissions other than storage, which is not an invasive permission.
script.js for the MV2 popup:
async function setUI() {
  chrome.storage.local.get(['isSupported'], function (response) {
    console.log(response['isSupported'])
    // isSupported will be true if the content script set it when the supported tab last gained focus, and false/undefined otherwise
    if (response['isSupported']) return showSupportedHTML()
    // else
    showUnsupportedHTML()
  })
}

function showSupportedHTML() {
  document.querySelector('#supported').style['display'] = ''
}

function showUnsupportedHTML() {
  document.querySelector('#unsupported').style['display'] = ''
}

window.addEventListener('DOMContentLoaded', () => {
  setUI()
})
Code for the content script in MV2:
if (!document.hidden) chrome.storage.local.set({'isSupported': true})

window.addEventListener('blur', () => {
  console.log('left site')
  chrome.storage.local.set({'isSupported': false})
})

window.addEventListener('focus', () => {
  console.log('entered site')
  chrome.storage.local.set({'isSupported': true})
})
Let me know if you have any additional questions.
Disclaimer: I have no prior browser extension development experience and am just going by the docs. I might be spouting nonsense or giving an answer that is plainly against your requirements, but that would be out of ignorance and not malicious intent. If you find my answer problematic, comment, or cast a vote and move on.
According to MDN, the activeTab permission allows you to read the active tab's Tab.url property. One solution could be to request that permission, use that API to get the active tab's URL, test it against the same regex as the manifest.json's matches property, and then use that information to modify your extension's browser_action UI.
You should be able to read the matches property from the manifest file via the .runtime.getManifest() API. MDN docs, chrome docs.
Snippet to get active tab in a background script: tabs.query({active: true}). (link to MDN docs). A content script should instead use tabs.getCurrent and the Tab.active property of the resolved result.
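A rough sketch of that approach (my own illustration, not from the links above; it assumes MV3-style promises and a simplified helper that turns a match pattern such as "*://*.example.com/*" into a regular expression):

// Simplified converter from a manifest match pattern to a RegExp.
// It does not cover every corner of the match-pattern spec.
function patternToRegExp(pattern) {
  const escaped = pattern
    .replace(/[.+?^${}()|[\]\\]/g, '\\$&') // escape regex metacharacters
    .replace(/\*/g, '.*');                 // "*" matches any run of characters
  return new RegExp('^' + escaped + '$');
}

async function isActiveTabSupported() {
  const [tab] = await chrome.tabs.query({ active: true, currentWindow: true });
  const patterns = (chrome.runtime.getManifest().content_scripts || [])
    .flatMap((cs) => cs.matches);
  // tab.url is only populated when you have activeTab (or tabs / host) permission
  return patterns.some((p) => patternToRegExp(p).test(tab.url || ''));
}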
If you don't want to request the activeTab permission, what you're suggesting with the message-passing between the browser_action scripts and the content scripts might be the right way to go, but I don't know for a fact. The tabs.onActivated event would probably be useful with this approach. Note that to send a message from a background script to a content script, you need to use tabs.sendMessage (MDN docs, chrome docs) instead of runtime.sendMessage.
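A rough sketch of how tabs.onActivated and tabs.sendMessage could combine in a background script (illustrative only; the message name is made up):

// When the active tab changes, ask its content script (if any) whether it is there.
chrome.tabs.onActivated.addListener((activeInfo) => {
  chrome.tabs.sendMessage(activeInfo.tabId, { message: 'areYouThere' }, (response) => {
    // response is undefined (and chrome.runtime.lastError is set) when no content
    // script answered; record the result wherever your popup can read it.
    const supported = !chrome.runtime.lastError && response === true;
    console.log('active tab supported:', supported);
  });
});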
Another possible (maybe?) approach would be to listen for the tab change in the content script and then send the notification message from the content script to the extension's background scripts via the onfocus event (or similar events), and runtime.sendMessage.
If you go with a messaging-related approach, you might want to put a condition in the content script to only do messaging if the content script is in the top frame of the tab (i.e., iframes don't do messaging), since only one frame of the tab really needs to do this kind of messaging when the active tab changes, and content scripts can be applied to all frames in a browsing context.
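For instance (a sketch; the message name is made up for illustration):

// Only the top frame reports focus changes; iframes stay quiet.
if (window === window.top) {
  window.addEventListener('focus', () => {
    chrome.runtime.sendMessage({ message: 'supportedTabFocused' });
  });
}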
Of these possible solutions I can think of, I don't know which is best for you, since you want both minimal permission requirements and a simple/clean approach, and each seems to be a tradeoff.

How to customize addContentItemDialog to restrict files over 10mb upload in IBM Content Navigator

I am customizing ICN (IBM Content Navigator) 2.0.3, and my requirement is to restrict users from uploading files over 10 MB; the only allowed file types are .pdf and .docx.
I know I have to extend/customize the AddContentItemDialog, but there is very little detail on exactly how to do it, and no videos. I'd appreciate it if someone could guide me.
Thanks
I installed the development environment but I am not sure how to extend the AddContentItemDialog.
public void applicationInit(HttpServletRequest request,
        PluginServiceCallbacks callbacks) throws Exception {
}
I also want to know how to roll out the changes to ICN.
This can easily be extended. I would suggest reading the ICN Redbook for the details on how to do it, but it is pretty standard code.
Regarding rolling out the code to ICN, there are two ways:
- If you are using a plugin: just replace the JAR file in the server location and restart WAS.
- If you are using EDS: you need to redeploy the web service and restart WAS.
Hope this helps.
Although there are many ways to do this, one way indeed is to extend, or augment, the AddContentItemDialog as you quoted. After looking at the (rather poor) IBM documentation, I figured you could probably use the onAdd event/method.
dojo/aspect#around allows you to do exactly that; for example:
require(["dojo/aspect", "ecm/widget/dialog/AddContentItemDialog"], function(aspect, AddContentItemDialog) {
aspect.around(AddContentItemDialog.prototype, "onAdd", function advisor(original) {
return function around() {
var files = this.addContentItemGeneralPane.getFileInputFiles();
var containsInvalidFiles = dojo.some(files, function isInvalid(file) {
var fileName = file.name.toLowerCase();
var extensionOK = fileName.endsWith(".pdf") || fileName.endsWith(".docx");
var fileSizeOK = file.size <= 10 * 1024 * 1024;
return !(extensionOK && fileSizeOK);
});
if (containsInvalidFiles) {
alert("You can't add that :)");
}else{
original.apply(this, arguments);
}
}
});
});
Just make sure this code gets executed before the actual dialog is opened. The best way to achieve this, is by wrapping this code in a new plugin.
Now, on creating/deploying plugins: the easiest way is this wizard for Eclipse (see also a repackaged version for newer Eclipse versions). Just create a new arbitrary plugin and paste this JavaScript code into the generated .js file.
Additionally, it might be good to note that you're only limiting "this specific dialog" to uploading specific files. It would probably be a good idea to also create a requestFilter to limit all possible uses of the addContent API...

How to fix type error while using gulp-filter?

I am mimicking the code from John Papa's outstanding Pluralsight course on Gulp.
When I use the code as shown in John's course:
.pipe(jsFilter)
.pipe($.uglify())
.pipe(jsFilter.restore())
I get an error on the 3rd line of code:
TypeError: Object #<StreamFilter> has no method 'restore'
When I use the code as shown in the readme from gulp-filter
.pipe(jsFilter)
.pipe($.uglify())
.pipe(jsFilter.restore)
I get an error that it can't pipe to undefined.
Based on what I can find online, both of these patterns are working for others. Any clues as to why this might be happening?
Here is the entire task, if that helps; the console logging indicates that everything is fine until the filter restore call:
gulp.task('build-dist', ['inject', 'templatecache'], function() {
  log('Building the distribution files in the /dist folder');
  var assets = $.useref.assets({searchPath: './'});
  var templateCache = config.temp + config.templateCache.file;
  var jsFilter = $.filter('**/*.js');

  return gulp
    .src(config.index)
    .pipe($.plumber({errorHandler: onError}))
    .pipe($.inject(gulp.src(templateCache, {read: false}), {
      starttag: '<!-- inject:templates:js -->'
    }))
    .pipe(assets)
    .pipe(jsFilter)
    .pipe($.uglify())
    .pipe(jsFilter.restore())
    .pipe(assets.restore())
    .pipe($.useref())
    .pipe(gulp.dest(config.dist));
});
The way restore works has changed between the 2.x and 3.x releases of gulp-filter.
It seems you're using the 3.x branch, so in order to use restore you'll have to set the restore option to true when defining the filter:
var jsFilter = $.filter('**/*.js', {restore: true});
Then you'll be able to do
.pipe(jsFilter.restore)
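Applied to the task above, the relevant portion would look roughly like this (a sketch assuming gulp-filter 3.x; the other pipes stay as they are):

var jsFilter = $.filter('**/*.js', {restore: true});

return gulp
  .src(config.index)
  // ... plumber, inject, and assets pipes as above ...
  .pipe(jsFilter)
  .pipe($.uglify())
  .pipe(jsFilter.restore)   // a stream property in 3.x, not a function call
  .pipe(assets.restore())
  .pipe($.useref())
  .pipe(gulp.dest(config.dist));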
For more information, check out this section of the documentation for the latest version of gulp-filter:
https://github.com/sindresorhus/gulp-filter/tree/v3.0.1#restoring-filtered-files

writing a Jacada Interaction extension

I want to create an "extension" for a Jacada Interaction (to extend functionality), in my case to parse and assign the numerical part of serialNumber (a letter, followed by digits) to a numeric global ("system") variable, say serialNumeric. What I am lacking is the structure and syntax to make this work, including the way to reference interaction variables from within the extension.
Here is my failed attempt, with lines commented out to make it innocuous after it failed; I think I removed "return page;" after it crashed, whereupon it still crashed:
initExtensions("serialNumeric", function(app){
app.registerExtension("loaded", function(ctx, page) {
// Place your extension code here
//$('[data-refname="snum"]').val('serialNumber');
// snum = Number(substring(serialNumber,1))
});
});
Here is an example of one that works:
/**
 * Description: Add swiping gestures to navigate the next/previous pages
 */
initExtensions("swipe", function(app) {
  // Swipe gestures (mobile only)
  app.registerExtension('pageRenderer', function(ctx, page) {
    page.swipe(function(evt) {
      (evt.swipestart.coords[0] - evt.swipestop.coords[0] > 0)
        ? app.nextButton.trigger('click')
        : app.backButton.trigger('click')
    });
    return page;
  });
});
After reading the comment below, I tried the following, unsuccessfully (the modified question variable is not written back to that variable). It rendered poorly in the comment section, so I am putting it here:
initExtensions("serialNumeric", function(app){
app.registerExtension("loaded", function(ctx, page) {
var sernum = new String($('[data-refname="enter"] input'));
var snumeric = new String(sernum.substr(1));
$('[data-refname="enter"] input').val(snumeric);
});
});
I would like to understand when this code will run: it seems logical that it would run when the variable is assigned. Thanks for any insight ~
In your case, you extend the loaded event. You don't have to return the page from the extension as in the working swipe example you posted.
The page argument contains the DOM of the page you have just loaded; the ctx argument contains the data of the page in JSON form. You can inspect the content of both arguments in the browser's inspection tools (I like Chrome: press F12 on Windows or Shift+Ctrl+I on Mac).
The selector $('[data-refname="snum"] input') will get you the input field from the question with the name snum that you defined in the designer. You can then set the input field's value to the value of the serialNumber variable:
$('[data-refname="snum"] input').val(serialNumber);
You can also read values in the same way.
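For example (a sketch, using the same snum question field):

// Read the current value of the snum question field into a local variable.
var serial = $('[data-refname="snum"] input').val();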
You can't (at this point) access interaction variables in the extension, unless you place these variables inside question fields.
Here is a simple example of how to put your own value programmatically into an input field and cause it to be read into the model, so that upon Next it will be sent to the server. You are welcome to try more sophisticated selectors to accommodate your own form.
initExtensions("sample", function(app){
app.registerExtension("loaded", function(ctx, page) {
// simple selector
var i = $('input');
// set new value
i.val('some new value');
// cause trigger so we can read into our model
i.trigger('change');
});
});
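Putting the pieces together for the original goal (a sketch only; it assumes the serial number lives in a question field named snum and the numeric result should go into a question field named serialNumeric; both names are illustrative, so use whatever reference names you defined in the designer):

initExtensions("serialNumeric", function(app) {
  app.registerExtension("loaded", function(ctx, page) {
    // Read the serial number (a letter followed by digits) from its question field.
    var serial = $('[data-refname="snum"] input').val() || "";
    // Keep only the numeric part by stripping the leading letter.
    var numeric = serial.replace(/^[A-Za-z]/, "");
    // Write it into the target question field and trigger change so the model picks it up.
    $('[data-refname="serialNumeric"] input').val(numeric).trigger('change');
  });
});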

How do I customize wintersmith paginator?

I've been setting up a site with Wintersmith and am loving it for the most part, but I cannot wrap my head around some of the under-the-hood mechanics. I started with the "blog" skeleton that adds the paginator.coffee plugin.
The question requires some details, so up top, what I'm trying to accomplish:
1. Any files (markdown, html, json metadata) will be picked up either in /contents/articles/<file> or /contents/articles/<subdir>/<file>.
2. Output files are at /articles/YYYY/MM/DD/title-slug/.
3. /blog.html lists all articles, paginated.
4. Files just under /contents (not in articles) are not treated as blog posts. Markdown and JSON metadata are still processed, but there are no permalinked URLs, they are not included in blog listings, and the file/directory structure is more directly copied over.
So, I solved #1 with this suggestion: How can I have articles in Wintersmith not in their own subdirectory? So far, so good, and #3 is working: the paginated listing includes all posts. #4 has not been an issue; it's the default behavior.
On #2 I found this solution: http://andrewphilipclark.com/2013/11/08/removing-the-boilerplate-from-wintersmith-blog-posts/ . As the author mentions, his solution was (sort of) subsequently incorporated into Wintersmith master, so I tried just setting the filenameTemplate accordingly. Unfortunately this applies to all content, not just that under /articles, so the rest of my site gets hosed (breaks #4). So then I tried the author's approach, adding a blogpost.coffee plugin using his code. This generates all the files out of /contents/articles into the correct permalink URLs, however the paginator now for some reason will no longer see files directly under /articles (point #1).
I've tried a lot of permutations and hacking. Tried changing the order of which plugin gets loaded first. Tried having PaginatorPage extend BlogpostPage instead of Page. Tried a lot of things. I finally realize, even after inspecting many of the core classes in Wintersmith source, that I do not understand what is happening.
Specifically, I cannot figure out how contents['articles']._.pages and .directories are set, which seems relevant. Nor do I understand what that underscore is.
Ultimately, Jade/CoffeeScript/Markdown are a great combo for minimizing coding and enhancing clarity except when you want to understand what's happening under the hood and you don't know these languages. It took me a bit to get the basics of Jade and CoffeeScript (Markdown is trivial of course) enough to follow what's happening. When I've had to dig into the wintersmith source, it gets deeper. I confess I'm also a node.js newbie, but I think the big issue here is just a magic framework. It would be helpful, for instance, if some of the core "plugins" were included in the skeleton site as opposed to buried in node_modules, just so curious hackers could see more quickly how things interconnect. More verbose docs would of course be helpful too. It's one thing to understand conceptually content trees, generators, views, templates, etc., but understanding the code flow and relations at runtime? I'm lost.
Any help is appreciated. As I said, I'm loving Wintersmith, just wish I could dispel magic.
Because CoffeeScript is rubbish, this is extremely hard to do. However, if you want to, you can destroy paginator.coffee and replace it with a simple JavaScript script that does a similar thing:
module.exports = function (env, callback) {

  function Page() {
    var rtn = new env.plugins.Page();
    rtn.getFilename = function() {
      return 'index.html';
    };
    rtn.getView = function() {
      return function(env, locals, contents, templates, callback) {
        var error = null;
        var context = {};
        env.utils.extend(context, locals);
        var buffer = new Buffer(templates['index.jade'].fn(context));
        callback(error, buffer);
      };
    };
    return rtn;
  }

  /** Generates a custom index page */
  function gen(contents, callback) {
    var p = Page();
    var pages = {'index.page': p};
    var error = null;
    callback(error, pages);
  }

  env.registerGenerator('magic', gen);
  callback();
};
Notice that due to 'CoffeeScript magic', there are a number of hoops to jump through here, such as making sure you return a Buffer from getView(), and 'manually' overriding rather than using the obscure CoffeeScript extension semantics.
Wintersmith is extremely picky about how it handles these functions. If callbacks are not invoked, or the returned value is not a Stream or Buffer, generated files will appear in the content summary but not be rendered to disk during a build. Enable verbose logging and check for 'skipping foo' messages to detect this.