Code splitting with Rollup and Svelte regenerates all chunk hashes on every build, even with no modifications

I'm using Svelte and Rollup with code splitting, and here are some parts of my rollup.config.js:
input: {
  'boot': 'src/boot.js',
  'app': 'src/app.js',
  'agency': 'src/modules/agency.js',
  'buyer': 'src/modules/buyer.js',
  'buyer-group': 'src/modules/buyer-group.js',
  'investor': 'src/modules/investor.js',
  'management-unit': 'src/modules/management-unit.js',
  'platform': 'src/modules/platform.js',
  'supplier': 'src/modules/supplier.js',
  'tables': 'src/modules/tables.js',
  'pt-BR': 'src/core/locale/pt-BR.js',
  'en': 'src/core/locale/en.js',
  'external-svelte-package': 'node_modules/external-svelte-package/src/index.js'
},
output: {
  sourcemap: false,
  format: 'esm',
  dir: `${baseDir}/js`,
  entryFileNames: '[name]-[hash].js',
  chunkFileNames: '[name].[hash].js'
},
Rollup generates chunks with names containing hashes (e.g. investor-fa42bee8.js).
If I run the build script again, with no modifications to any file in the project, all the chunks are generated with new hashes, and this behavior defeats long-term client caching.
How can I change this behavior and guarantee the same hash for chunks that have not been modified?
Any help will be welcome.
Thanks in advance.

If the hashes change, something in your sources is changing. The likely culprit is a plugin or a banner that contains a timestamp or something like that.
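A quick way to confirm this is to diff two consecutive builds of the same chunk; whatever bytes differ are what invalidates the hash. As an illustration only (not the asker's actual config), a banner that embeds the build time changes the emitted code, and therefore the hash, on every run, while a constant banner keeps unmodified chunks byte-identical:

output: {
  // This hypothetical banner injects the build time into every chunk,
  // so its content (and hash) changes on each build:
  // banner: `/* built ${new Date().toISOString()} */`,

  // A constant banner keeps unmodified chunks byte-identical across builds:
  banner: '/* my-app */',
  format: 'esm',
  dir: `${baseDir}/js`,
  entryFileNames: '[name]-[hash].js',
  chunkFileNames: '[name].[hash].js'
}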

Related

Is there a way to watch file system events inside node_modules folder for vscode-languageclient?

Inside the extension's activate(context: ExtensionContext) function, I want to add a FileSystemWatcher. This works for something like:
const clientOptions: LanguageClientOptions = {
  documentSelector: [{ scheme: 'file', language: 'plainText' }],
  synchronize: {
    fileEvents: vscode.workspace.createFileSystemWatcher('**/someFolder/*.txt')
  }
}
But if I now want to watch a file inside the node_modules folder, nothing happens. Any idea?
There's a "files.watcherExclude" setting with the following defaults:
"files.watcherExclude": {
"**/.git/objects/**": true,
"**/.git/subtree-cache/**": true,
"**/node_modules/*/**": true,
"**/.hg/store/**": true
}
Configure glob patterns of file paths to exclude from file watching. Patterns must match on absolute paths (i.e. prefix with ** or the full path to match properly). Changing this setting requires a restart. When you experience Code consuming lots of CPU time on startup, you can exclude large folders to reduce the initial load.
Performance-wise it's probably not advisable to remove node_modules from here either, since it can contain a lot of files. In any case, since it's a user setting, you're not in control of this as an extension author.

Get the list of PNG files in File Storage excluding _processed_ folder

As the title says, I need to get only the unprocessed PNG files.
My current approach is the following:
$fileExtensionFilter = $this->objectManager->get(FileExtensionFilter::class);
$fileExtensionFilter->setAllowedFileExtensions('png');
$storage->addFileAndFolderNameFilter([$fileExtensionFilter, 'filterFileList']);
$availablePngFiles = $storage->getFileIdentifiersInFolder($storage->getRootLevelFolder(false)->getIdentifier(), true, true);
foreach ($availablePngFiles as $pngFile) {
    if (!$storage->isWithinProcessingFolder($pngFile)) {
        $pngFileObject = $storage->getFile($pngFile);
    }
}
So it works, but I'd like to avoid the unnecessary isWithinProcessingFolder() lookup and get only the original, unprocessed files, which would significantly reduce the number of iterations.
TYPO3 core 7.6.19 only ships with two filters: FileExtensionFilter and FileNameFilter, which is actually a "hidden file" filter.
You could write your own file filter and do the filtering there, but that's far more work than keeping those few lines of code.

Babel plugins run order

TL;DR: Is there a way to specify the order in which Babel plugins run? How does Babel determine this order? Is there any spec for how this works, apart from diving into the Babel sources?
I'm developing my own Babel plugin. I noticed that when I run it, my plugin runs before the other es2015 plugins. For example, given code such as:
const a = () => 1
and a visitor such as:
visitor: {
  ArrowFunctionExpression(path) {
    console.log('ArrowFunction')
  },
  FunctionExpression(path) {
    console.log('Function')
  },
}
my plugin logs ArrowFunction (and not Function). I played with the order in which the plugins are listed in the Babel configuration, but that didn't change anything:
plugins: ['path_to_myplugin', 'transform-es2015-arrow-functions'],
plugins: ['transform-es2015-arrow-functions', 'path_to_myplugin'],
On the other hand, this suggests that the order DOES somehow matter:
https://phabricator.babeljs.io/T6719
---- EDIT ----
I found out that if I write my visitor as follows:
ArrowFunctionExpression: {
  enter(path) {
    console.log('ArrowFunction')
  }
},
FunctionExpression: {
  exit(path) {
    console.log('Function')
  }
},
both functions are called. So it looks like the order of execution is: myplugin_enter -> other_plugin -> myplugin_exit. In other words, myplugin seems to sit before other_plugin in some internal pipeline. The main question, however, stays the same: the order of plugins in the pipeline must be determined and configurable somehow.
The order of plugins is based on the order of entries in your .babelrc: plugins run before presets, plugins run in the order they are listed, and presets run in reverse order (last to first).
The key thing, though, is that the ordering is per AST node. Each plugin does not do its own full traversal; Babel does a single traversal, running all plugins in parallel, with each node processed one at a time and each plugin's handler for that node invoked in turn.
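To make the per-node ordering concrete, here is a minimal sketch with two hypothetical logging plugins (all names made up); with plugin functions registered directly in the plugins array, the handler of the plugin listed first runs first on each matching node, and both handlers run within one shared traversal:

// babel.config.js (illustrative sketch only)
const pluginA = () => ({
  visitor: {
    ArrowFunctionExpression(path) { console.log('A saw an arrow function') }
  }
});

const pluginB = () => ({
  visitor: {
    ArrowFunctionExpression(path) { console.log('B saw an arrow function') }
  }
});

// For each arrow function in the file, 'A saw an arrow function' is logged
// before 'B saw an arrow function', because pluginA is listed first.
module.exports = { plugins: [pluginA, pluginB] };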
Basically, what @loganfsmyth wrote is correct; there is (probably) no more magic in plugin ordering itself.
As for my problem specifically, my confusion was caused by how the arrow function transformation works. Even though babel-plugin-transform-es2015-arrow-functions transforms the code before my plugin gets to it, it does not remove the original arrow-function node from the AST, so the later plugin still sees it.
Lesson learned: when dealing with Babel, don't underestimate the number of debug print statements needed to understand what's happening.

Setting the --in-source-map uglify2 config option in r.js build config file

I wanted to use the --in-source-map option that is available in UglifyJS2 for consuming a source map file and outputting a new one along with the compressed JS, thereby performing multi-level source mapping. I am using r.js to minify the JavaScript files, and I saw there is a way to define config values that are passed on to UglifyJS2. Here is a sample showing how it is done in the r.js build file:
uglify2: {
  // Example of a specialized config. If you are fine
  // with the default options, no need to specify
  // any of these properties.
  output: {
    beautify: true
  },
  compress: {
    sequences: false,
    global_defs: {
      DEBUG: false
    }
  },
  warnings: true,
  mangle: false
},
Obviously r.js has its own way of structuring the config options, and I was not able to figure out how to set the --in-source-map option within this structure. I tried putting the following statement in the output element, in the compress element, and even outside, next to the warnings config option.
in-source-map: sample.map
I have also tried putting quotes around the file name. Unfortunately, none of these attempts worked. Can anyone help me figure out this problem? Could it also be that this option is simply not supported by r.js?
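For reference, and outside of r.js entirely, this is roughly how the standalone UglifyJS2 (uglify-js 2.x) programmatic API accepts these options when called directly; whether and how r.js forwards them through its uglify2 block is exactly what I could not confirm, so treat this only as a sketch of the underlying option names:

var UglifyJS = require('uglify-js'); // uglify-js 2.x

var result = UglifyJS.minify('sample.js', {
  inSourceMap: 'sample.map',        // existing map to consume
  outSourceMap: 'sample.min.js.map' // new map to emit alongside the minified code
});
// result.code holds the compressed JS, result.map the combined source map.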

Extend multiple sources / indexes

I have many web pages that are clones of each other. They have the exact same database structure, just different data in different databases (each clone is for a different country, so everything is separated).
I would like to clean up my sphinx config file so that I don't duplicate the same queries for every site.
I'd like to define a main source (with db auth info) for every clone, a common source for every table I'd like to search, and then sources & indexes for every table and every clone.
But I'm not sure how exactly I should go about doing that.
I was thinking of something along these lines:
index common_index
{
  # charset_type, stopwords, etc.
}

source common_clone1
{
  # sql_host, sql_user, ...
}

source common_clone2
{
  # sql_host, sql_user, ...
}

# ...

source table1
{
  # sql_query, sql_attr_*, ...
}

source clone1_table1 : ???
{
  # ???
}

# ...

index clone1_table1 : common_index
{
  source: clone1_table1
  # path, ...
}

# ...
# ...
So you can see where I'm confused :)
I thought I could do something like this:
source clone1_table1 : table1, common_clone1 {}
but obviously that doesn't work.
Basically, what I'm asking is: is there any way to extend two sources/indexes?
If this isn't possible I'll be "forced" to write a script that will generate my sphinx config file to ease maintenance.
Apparently this isn't possible (don't know if it's in the pipeline for the future). I'll have to resort to generating the config file with some sort of script.
I've created such a script; you can find it on GitHub: sphinx generate config php
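For anyone taking the same route, the idea behind such a generator is simple; here is a minimal sketch in JavaScript (the script linked above is PHP, and all names below are made up): loop over the per-clone connection settings and the shared per-table queries, and emit one source/index pair per combination.

// generate-sphinx-conf.js (illustrative sketch, hypothetical names and paths)
const fs = require('fs');

const clones = {
  clone1: { host: 'db1.example.com', user: 'sphinx', pass: 'secret', db: 'site_clone1' },
  clone2: { host: 'db2.example.com', user: 'sphinx', pass: 'secret', db: 'site_clone2' }
};

const tables = {
  table1: 'SELECT id, title, body FROM table1'
};

let conf = '';
for (const [cloneName, c] of Object.entries(clones)) {
  for (const [tableName, query] of Object.entries(tables)) {
    conf += [
      `source ${cloneName}_${tableName}`,
      '{',
      '  type      = mysql',
      `  sql_host  = ${c.host}`,
      `  sql_user  = ${c.user}`,
      `  sql_pass  = ${c.pass}`,
      `  sql_db    = ${c.db}`,
      `  sql_query = ${query}`,
      '}',
      '',
      `index ${cloneName}_${tableName}`,
      '{',
      `  source = ${cloneName}_${tableName}`,
      `  path   = /var/lib/sphinx/${cloneName}_${tableName}`,
      '}',
      ''
    ].join('\n');
  }
}

fs.writeFileSync('sphinx.conf', conf);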