Chrome DevTools Coverage: how to save or capture used code? - google-chrome-devtools

The Coverage tool is good at finding used and unused code. However, there doesn't appear to be a way to save or export only the used code. Even hiding unused code would be helpful.
I'm attempting to reduce the amount of Bootstrap CSS in my application; the file is more than 7000 lines. The only way to get just the used code is to carefully scroll through the file, look for the green sections, and copy that code to a new file. It's time-consuming and unreliable.
Is there a different way? Chrome 60 does not seem to have added this functionality.

You can do this with Puppeteer:
const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  // Raw DevTools Protocol commands are sent with `client.send()`.
  // First enable the "domains" needed for the commands we care about.
  const client = await page.target().createCDPSession();
  await client.send('Page.enable');
  await client.send('DOM.enable');
  await client.send('CSS.enable');
  const inlineStylesheetIndex = new Set();
  client.on('CSS.styleSheetAdded', stylesheet => {
    const { header } = stylesheet;
    if (header.isInline || header.sourceURL === '' || header.sourceURL.startsWith('blob:')) {
      inlineStylesheetIndex.add(header.styleSheetId);
    }
  });
  // Start tracking CSS coverage
  await client.send('CSS.startRuleUsageTracking');
  await page.goto(`http://localhost`);
  // const content = await page.content();
  // console.log(content);
  const rules = await client.send('CSS.takeCoverageDelta');
  const usedRules = rules.coverage.filter(rule => rule.used);
  const slices = [];
  for (const usedRule of usedRules) {
    // Skip inline <style> blocks; we only want external stylesheets
    if (inlineStylesheetIndex.has(usedRule.styleSheetId)) {
      continue;
    }
    const stylesheet = await client.send('CSS.getStyleSheetText', {
      styleSheetId: usedRule.styleSheetId
    });
    slices.push(stylesheet.text.slice(usedRule.startOffset, usedRule.endOffset));
  }
  console.log(slices.join(''));
  await page.close();
  await browser.close();
})();
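If you save this script as, say, extract-used-css.js (a file name chosen here just for illustration) and point page.goto at your site, you can run `node extract-used-css.js > used.css` to redirect the printed CSS into a file.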

You can do this with Headless Chrome and puppeteer:
In a new folder install puppeteer using npm (this will include Headless Chrome for you):
npm i puppeteer --save
Put the following in a file called csscoverage.js and change localhost to point to your website:
const puppeteer = require('puppeteer');
const util = require('util');
const fs = require('fs');

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.coverage.startCSSCoverage();
  await page.goto('https://localhost'); // Change this
  const css_coverage = await page.coverage.stopCSSCoverage();
  console.log(util.inspect(css_coverage, { showHidden: false, depth: null }));
  await browser.close();
  let total_bytes = 0;
  let used_bytes = 0;
  for (const entry of css_coverage) {
    let final_css_bytes = '';
    total_bytes += entry.text.length;
    for (const range of entry.ranges) {
      used_bytes += range.end - range.start;
      final_css_bytes += entry.text.slice(range.start, range.end) + '\n';
    }
    const filename = entry.url.split('/').pop();
    fs.writeFile('./' + filename, final_css_bytes, error => {
      if (error) {
        console.log('Error creating file:', error);
      } else {
        console.log('File saved');
      }
    });
  }
})();
Run it with `node csscoverage.js`.
This will output all the CSS you're actually using, split into separate files matching the stylesheets it came from (so external libraries don't get merged into your own code, as happens with the other answer).

I talked with the engineer who owns this feature. As of Chrome 64 there's still no way to export the results of a coverage analysis.
Star issue #717195 to help the team prioritize this feature request.

I love this simple solution. It works with the Coverage tool in Chrome without installing anything else: you simply use the JSON file that the Coverage tool lets you export:
https://nachovz.github.io/devtools-coverage-css-generator/
But be aware of the comment below my answer; it's right that this approach is risky. I am still hoping for an update.

First of all you need to download and install Google Chrome Dev.
In Google Chrome Dev, open Inspect Element > Sources and press Ctrl+Shift+P.
Type "coverage" and select "Start instrumenting coverage and reload page".
Then use the Export icon; this will give you a JSON file.
You can also visit: Chrome DevTools: Export your raw Code Coverage Data
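The exported file is a JSON array of coverage entries; judging by the parsing scripts below, each entry carries the resource url, the full text of that resource, and a list of used ranges with start/end character offsets into the text.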

I downloaded the latest version of Canary and the export button was present.
I then used this PHP script to parse the JSON file returned (where key '6' in the array is the resource to parse). I hope it helps someone!
$a = json_decode(file_get_contents('coverage3.json'));
$sText = $a[6]->text;
$sOut = "";
foreach ($a[6]->ranges as $iPos => $oR) {
    $sOut .= substr($sText, $oR->start, ($oR->end - $oR->start)) . " \n";
}
echo '<style rel="stylesheet" type="text/css">' . $sOut . '</style>';

Chrome Canary 73 can do it. You will need Windows or macOS. There is an export function (down-arrow icon) next to the record and clear buttons. You'll get a JSON file that you can then use to programmatically remove the unused lines.
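As a rough sketch of that last step (this is not from the original answer; the file name is a placeholder), a small Node script can read the exported JSON and write only the used ranges of each CSS entry back out:
// Sketch only: parse an exported Coverage JSON file and keep the used CSS.
// 'coverage.json' is a placeholder for whatever file name DevTools gave you.
const fs = require('fs');

const coverage = JSON.parse(fs.readFileSync('coverage.json', 'utf8'));

for (const entry of coverage) {
    if (!entry.url.endsWith('.css')) continue; // skip JS entries
    // Each entry has { url, ranges: [{ start, end }], text }
    const used = entry.ranges
        .map(range => entry.text.slice(range.start, range.end))
        .join('\n');
    const filename = entry.url.split('/').pop() || 'inline.css';
    fs.writeFileSync(filename, used);
    console.log('Wrote ' + filename + ' (' + used.length + ' of ' + entry.text.length + ' bytes)');
}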

Here's a version that will keep media queries, based on Christopher Schiefer's:
$jsont = <<<'EOD'
{ "url":"test"}
EOD;
$a = json_decode($jsont);
$sText = $a->text;
preg_match_all('(@media(?>[^{]|(?0))*?{)', $sText, $mediaStartsTmp, PREG_OFFSET_CAPTURE);
preg_match_all("/\}(\s|\\n|\\t)*\}/", $sText, $mediaEndsTmp, PREG_OFFSET_CAPTURE);
$mediaStarts = empty($mediaStartsTmp) ? array() : $mediaStartsTmp[0];
$mediaEnds = empty($mediaEndsTmp) ? array() : $mediaEndsTmp[0];
$sOut = "";
$needMediaClose = false;
foreach ($a->ranges as $iPos => $oR) {
    if ($needMediaClose) { // you are in a media query
        // add closing bracket if you were in a media query and are past it
        if ($oR->start > $mediaEnds[0][1]) {
            $sOut .= "}\n";
            array_splice($mediaEnds, 0, 1);
            $needMediaClose = false;
        }
    }
    if (!$needMediaClose) {
        // remove any skipped media queries
        while (!empty($mediaEnds) && $oR->start > $mediaEnds[0][1]) {
            array_splice($mediaStarts, 0, 1);
            array_splice($mediaEnds, 0, 1);
        }
    }
    if (!empty($mediaStarts) && $oR->start > $mediaStarts[0][1]) {
        $sOut .= "\n" . $mediaStarts[0][0] . "\n";
        array_splice($mediaStarts, 0, 1);
        $needMediaClose = true;
    }
    $sOut .= mb_substr($sText, $oR->start, ($oR->end - $oR->start)) . " \n";
}
if ($needMediaClose) { $sOut .= '}'; }
echo '<style rel="stylesheet" type="text/css">' . $sOut . '</style>';
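To use this, replace the { "url":"test"} placeholder inside the heredoc with the coverage entry for your stylesheet (the single object from the exported JSON array, collapsed to one line), run the script, and copy the CSS out of the emitted <style> tag.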

Here's my Python code to extract the used code:
import json

code_coverage_filename = 'Coverage-20210613T173016.json'
specific_file_url = 'https://localhost:3000/b.css'

with open(code_coverage_filename) as f:
    data = json.load(f)

for entry in data:
    # print(entry['url'])
    if entry['url'] == specific_file_url:
        text = ""
        for range in entry['ranges']:
            range_start = range['start']
            range_end = range['end']
            text += entry['text'][int(range_start):int(range_end)] + "\n"
        print(text)
However, there is a problem. The Chrome debugger doesn't mark lines like
@media (min-width: 768px) {
so it's a bit problematic to use this technique.
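The media-query-preserving PHP version earlier on this page works around exactly that: it re-inserts the @media ... { headers and closing braces that the coverage ranges leave out.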

A more practical version based on Atoms' answer, improved to work without any files:
PHP sandbox: http://sandbox.onlinephpfunctions.com/
JSON formatter to convert the export to a single line: https://www.freeformatter.com/json-formatter.html#ad-output
Unminify the result: https://unminify.com/
$jsont = <<<'EOD'
{ "url":"test"}
EOD;
$a = json_decode($jsont);
$sText = $a->text;
$sOut = "";
foreach ($a->ranges as $iPos => $oR) {
    $sOut .= substr($sText, $oR->start, ($oR->end - $oR->start)) . " \n";
}
echo '<style rel="stylesheet" type="text/css">' . $sOut . '</style>';

I use the DisCoverage Chrome extension; it parses the JSON file from the Coverage tool.

Related

Next.js Strapi integration not displaying data

I am trying to build a simple task website to get familiar with full-stack development. I am using Next.js and Strapi. I have tried everything I can think of, but the data from the server just will not display on the frontend. It seems to me that the page loads too soon, before the data has been loaded in. However, I am not a full-stack dev and am therefore not sure.
import axios from 'axios';

const Tasks = ({ tasks }) => {
    return (
        <ul>
            {tasks && tasks.map(task => (
                <li key={task.id}>{task.name}</li>
            ))}
        </ul>
    );
};

export async function getStaticProps() {
    const res = await axios.get('http://localhost:1337/tasks');
    const data = await res.data;
    if (!data) {
        return {
            notFound: true,
        };
    } else {
        console.log(data);
    }
    return {
        props: { tasks: data },
    };
}

export default Tasks;
I had the same issue. You need to call the API from the page files in the pages folder (getStaticProps is only picked up for components under pages/). I don't know why this is, but that's how it works.
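As a minimal sketch of that point (the file name pages/tasks.js is an assumption, not something from the question), the same component works once it lives directly under pages/:
// pages/tasks.js - getStaticProps only runs for files in the pages folder
import axios from 'axios';

const Tasks = ({ tasks }) => (
    <ul>
        {tasks.map(task => (
            <li key={task.id}>{task.name}</li>
        ))}
    </ul>
);

export async function getStaticProps() {
    const res = await axios.get('http://localhost:1337/tasks');
    const data = res.data;
    if (!data) {
        return { notFound: true };
    }
    return { props: { tasks: data } };
}

export default Tasks;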

ionic error when trying to run with ionic serve

I've downloaded a repository from Git to make amendments to it; however, I can't seem to compile it and make it run.
I was prompted to install node modules, @ionic/cli-plugin-gulp and also @ionic/cli-plugin-ionic1, as this is an Ionic 1 based project.
I keep receiving this error:
C:\Users\User1\Desktop\belfastsalah-master\belfastsalah-master\node_modules\@ionic\cli-plugin-ionic1\dist\serve\live-reload.js:19
let contentStr = content.toString();
^
TypeError: Cannot read property 'toString' of undefined
at Object.injectLiveReloadScript (C:\Users\User1\Desktop\belfastsalah-master\belfastsalah-master\node_modules\@ionic\cli-plugin-ionic1\dist\serve\live-reload.js:19:29)
at ReadFileContext.fs.readFile [as callback] (C:\Users\User1\Desktop\belfastsalah-master\belfastsalah-master\node_modules\@ionic\cli-plugin-ionic1\dist\serve\http-server.js:59:39)
at FSReqWrap.readFileAfterOpen [as oncomplete] (fs.js:366:13)
Below is the code from the JS file the error appears in; however, this hasn't been modified by me. It comes from the package I was prompted to install, as stated above.
"use strict";
Object.defineProperty(exports, "__esModule", { value: true });
const path = require("path");
const modules_1 = require("../lib/modules");
function createLiveReloadServer(options) {
const tinylr = modules_1.load('tiny-lr');
const liveReloadServer = tinylr();
liveReloadServer.listen(options.livereloadPort, options.address);
return (changedFiles) => {
liveReloadServer.changed({
body: {
files: changedFiles.map(changedFile => ('/' + path.relative(options.wwwDir, changedFile)))
}
});
};
}
exports.createLiveReloadServer = createLiveReloadServer;
function injectLiveReloadScript(content, host, port) {
let contentStr = content.toString();
const liveReloadScript = getLiveReloadScript(host, port);
if (contentStr.indexOf('/livereload.js') > -1) {
return content;
}
let match = contentStr.match(/<\/body>(?![\s\S]*<\/body>)/i);
if (!match) {
match = contentStr.match(/<\/html>(?![\s\S]*<\/html>)/i);
}
if (match) {
contentStr = contentStr.replace(match[0], `${liveReloadScript}\n${match[0]}`);
}
else {
contentStr += liveReloadScript;
}
return contentStr;
}
exports.injectLiveReloadScript = injectLiveReloadScript;
function getLiveReloadScript(host, port) {
if (host === '0.0.0.0') {
host = 'localhost';
}
const src = `//${host}:${port}/livereload.js?snipver=1`;
return ` <!-- Ionic Dev Server: Injected LiveReload Script -->\n` + ` <script src="${src}" async="" defer=""></script>`;
}
Any help would be greatly appreciated.
Thanks
You should check whether, after all bundling/generation is done, www/index.html exists.
I had this problem after extensive experiments with index.html generation, which resulted in it being gone ;)

jupyter-js-services - how to save notebook

I'm trying to use Jupyter as a backend for my system, and right now I'm playing with examples from the jupyter-js-services API docs.
Using IKernel and INotebookSession I managed to execute simple code and get the response from the kernel.
But I can't figure out how to extract the notebook itself; there's nothing like "saveNotebook()" in the API. I tried executing session.renameNotebook(); it completes successfully, but no files appear in the filesystem (I tried different paths like "/tmp/trynote.ipynb", "trynote.ipynb" and so on...).
Here's the code; it is a slightly edited example from the http://jupyter.org/jupyter-js-services/ page:
#!/usr/bin/env node
var jpt = require("jupyter-js-services");
var xr = require("xmlhttprequest");
var ws = require("ws");

global.XMLHttpRequest = xr.XMLHttpRequest;
global.WebSocket = ws;

// start a new session
var options = {
    baseUrl: 'http://localhost:8889',
    wsUrl: 'ws://localhost:8889',
    kernelName: 'python',
    notebookPath: 'trynote.ipynb'
};

jpt.startNewSession(options).then((session) => {
    // execute and handle replies on the kernel
    var future = session.kernel.execute({ code: 'print(5 * 5);' });
    future.onDone = (msg) => {
        console.log('Future is fulfilled: ');
        console.log(msg);
    };
    future.onIOPub = (msg) => {
        console.log("Message in IOPub: ");
        console.log(msg);
    };
    // rename the notebook
    session.renameNotebook('trynote2.ipynb').then(() => {
        console.log('Notebook renamed to', session.notebookPath);
    });
    // register a callback for when the session dies
    session.sessionDied.connect(() => {
        console.log('session died');
    });
    // kill the session
    session.shutdown().then(() => {
        console.log('session closed');
    });
});
Looking at the ContentsManager API, it seems to work with already existing files, or with creating new ones, but it's unclear how it is bound to sessions.
What's more, even the simplest attempt to use the "newUntitled" function gives a 404 response...
var contents = new jpt.ContentsManager('http://localhost:8889');

// create a new python file
contents.newUntitled("foo", { type: "file", ext: "py" }).then(
    (model) => {
        console.log(model.path);
    }
);
I feel a bit disoriented with all this and would appreciate any explanations.
Thanks..

Log in to Facebook with phantomjs - 302 issues?

I'm trying to write a phantomjs script to log in to my facebook account and take a screenshot.
Here's my code:
var page = require('webpage').create();
var system = require('system');

var stepIndex = 0;
var loadInProgress = false;

email = system.args[1];
password = system.args[2];

page.onLoadStarted = function() {
    loadInProgress = true;
    console.log("load started");
};

page.onLoadFinished = function() {
    loadInProgress = false;
    console.log("load finished");
};

var steps = [
    function() {
        page.open("http://www.facebook.com/login.php", function(status) {
            page.evaluate(function(email, password) {
                document.querySelector("input[name='email']").value = email;
                document.querySelector("input[name='pass']").value = password;
                document.querySelector("#login_form").submit();
                console.log("Login submitted!");
            }, email, password);
            page.render('output.png');
        });
    },
    function() {
        console.log(document.documentElement.innerHTML);
    },
    function() {
        phantom.exit();
    }
];

setInterval(function() {
    if (!loadInProgress && typeof steps[stepIndex] == "function") {
        console.log("step " + (stepIndex + 1));
        steps[stepIndex]();
        stepIndex++;
    }
    if (typeof steps[stepIndex] != "function") {
        console.log("test complete!");
        phantom.exit();
    }
}, 10000);
(Inspired by this answer, but note that I've upped the interval to 10s)
Called like so:
./phantomjs test.js <email> <password>
With output (filtering out the selfxss warnings from Facebook):
step 1
load started
load finished
Login submitted!
load started
load finished
step 2
<head></head><body></body>
step 3
test complete!
(Note that the HTML output in step two is empty.)
This answer suggests that there are problems with PhantomJS's SSL options, but running with --ssl-protocol=any has no effect.
This appears to be a similar problem, but for CasperJS, not PhantomJS (and on Windows, not Mac) - I've tried using --ignore-ssl-errors=yes, but that also had no effect.
I guessed that this might be a redirection problem (and, indeed, when I replicate this in Chrome, the response to clicking "Submit" is a 302 Found with location https://www.facebook.com/checkpoint/?next), but according to this documentation I can set a page.onNavigationRequested handler - when I do so in my script, it doesn't get called.
I think this issue is related, but it looks as if there's no fix there.

Sails.js checking stuff before uploading files to MongoDB with skipper (valid files, image resizing etc)

I'm currently creating a file upload system in my application. My backend is Sails.js (10.4), which serves as an API for my separate front-end (Angular).
I've chosen to store the files I'm uploading in my MongoDB instance, using Sails' built-in file upload module, Skipper. I'm using the adapter skipper-gridfs (https://github.com/willhuang85/skipper-gridfs) to upload the files to Mongo.
Now, it's not a problem to upload the files themselves: I'm using dropzone.js on my client, which sends the uploaded files to /api/v1/files/upload. The files get uploaded.
To achieve this I'm using the following code in my FileController:
module.exports = {
    upload: function(req, res) {
        req.file('uploadfile').upload({
            // ...any other options here...
            adapter: require('skipper-gridfs'),
            uri: 'mongodb://localhost:27017/db_name.files'
        }, function(err, files) {
            if (err) {
                return res.serverError(err);
            }
            console.log('', files);
            return res.json({
                message: files.length + ' file(s) uploaded successfully!',
                files: files
            });
        });
    }
};
Now the problem: I want to do stuff with the files before they get uploaded. Specifically three things:
Check if the file is allowed: does the content-type header match the file types I want to allow? (jpeg, png, pdf etc. - just basic files).
If the file is an image, resize it to a few pre-defined sizes using imagemagick (or something similar).
Add file-specific information that will also be saved to the database: a reference to the user who has uploaded the file, and a reference to the model (i.e. article/comment) the file is part of.
I don't have a clue where to start or how to implement this kind of functionality. So any help would be greatly appreciated!
Ok, after fiddling with this for a while I've managed to find a way that seems to work.
It could probably be better, but it does what I want it to do for now:
upload: function(req, res) {
    var upload = req.file('file')._files[0].stream,
        headers = upload.headers,
        byteCount = upload.byteCount,
        validated = true,
        errorMessages = [],
        fileParams = {},
        settings = {
            allowedTypes: ['image/jpeg', 'image/png'],
            maxBytes: 100 * 1024 * 1024
        };

    // Check file type
    if (_.indexOf(settings.allowedTypes, headers['content-type']) === -1) {
        validated = false;
        errorMessages.push('Wrong filetype (' + headers['content-type'] + ').');
    }
    // Check file size
    if (byteCount > settings.maxBytes) {
        validated = false;
        errorMessages.push('Filesize exceeded: ' + byteCount + '/' + settings.maxBytes + '.');
    }
    // Upload the file.
    if (validated) {
        sails.log.verbose(__filename + ':' + __line + ' [File validated: starting upload.]');
        // First upload the file
        req.file('file').upload({}, function(err, files) {
            if (err) {
                return res.serverError(err);
            }
            fileParams = {
                fileName: files[0].fd.split('/').pop().split('.').shift(),
                extension: files[0].fd.split('.').pop(),
                originalName: upload.filename,
                contentType: files[0].type,
                fileSize: files[0].size,
                uploadedBy: req.userID
            };
            // Create a File model.
            File.create(fileParams, function(err, newFile) {
                if (err) {
                    return res.serverError(err);
                }
                return res.json(200, {
                    message: files.length + ' file(s) uploaded successfully!',
                    file: newFile
                });
            });
        });
    } else {
        sails.log.verbose(__filename + ':' + __line + ' [File not uploaded: ', errorMessages.join(' - ') + ']');
        return res.json(400, {
            message: 'File not uploaded: ' + errorMessages.join(' - ')
        });
    }
},
Instead of using skipper-gridfs, I've chosen to use local file storage, but the idea stays the same. Again, it's not as complete as it should be yet, but it's an easy way to validate simple stuff like filetype and size. If somebody has a better solution, please post it :)!
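For the image-resizing part of the question, which the code above doesn't cover, here's a minimal sketch using the sharp package instead of ImageMagick; the package choice, the preset widths and the naming scheme are assumptions, not part of the original answer:
// Hypothetical helper: make resized copies of an already-uploaded image.
// Assumes local file storage (as above) and that 'sharp' is installed.
const sharp = require('sharp');

function createResizedCopies(filePath) {
    const widths = [320, 768, 1280]; // arbitrary preset sizes
    return Promise.all(widths.map(function(width) {
        // e.g. uploads/abc.jpg -> uploads/abc-320.jpg
        return sharp(filePath)
            .resize({ width })
            .toFile(filePath.replace(/(\.\w+)$/, '-' + width + '$1'));
    }));
}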
You can specify a callback for the .upload() function. Example:
req.file('media').upload(function (error, files) {
    var file;
    // Make sure upload succeeded.
    if (error) {
        return res.serverError('upload_failed', error);
    }
    // files is an array of files with the properties you want, like files[0].size
});
You can call the adapter with the file to upload from there, within the callback of .upload().