JSDoc: Lookup tutorials from different directories

Is there a way to ask JSDoc (either on the command line or through the grunt-jsdoc plugin) to look up tutorials from different directories?
As per the documentation, -u allows you to specify the directory in which JSDoc should search for tutorials (note that it says directory, singular, not directories).
I tried the following with no luck:
specifying several paths separated by spaces or commas
specifying one string containing a shell/Ant-style glob pattern

As suggested by @Vasil Vanchuk, a solution would be to create links to all tutorial files within a single directory. JSDoc3 will then be happy and proceed with the generation of all tutorials.
Creating and maintaining links manually would be a tedious task, so for people using Grunt, the grunt-contrib-symlink plugin comes in handy. With this plugin, the solution is reduced to a configuration task.
My Gruntfile.js looks like the following:
module.exports = function (grunt) {
  grunt.initConfig({
    clean: ['tmp', 'doc'],
    symlink: {
      options: {
        overwrite: false,
      },
      tutorials: {
        files: [{
          cwd: '../module1/src/main/js/tut',
          dest: 'tmp/tutorial-generation-workspace',
          expand: true,
          src: ['*'],
        }, {
          cwd: '../module2/src/main/js/tut',
          dest: 'tmp/tutorial-generation-workspace',
          expand: true,
          src: ['*'],
        }]
      }
    },
    jsdoc: {
      all: {
        src: [
          '../module1/src/main/js/**/*.js',
          '../module2/src/main/js/**/*.js',
          './README.md',
        ],
        options: {
          destination: 'doc',
          tutorials: 'tmp/tutorial-generation-workspace',
          configure: 'jsdocconf.json',
          template: 'node_modules/grunt-jsdoc/node_modules/ink-docstrap/template',
        },
      }
    },
  });

  grunt.loadNpmTasks('grunt-contrib-clean');
  grunt.loadNpmTasks('grunt-contrib-symlink');
  grunt.loadNpmTasks('grunt-jsdoc');
  grunt.registerTask('build', ['clean', 'symlink', 'jsdoc']);
  grunt.registerTask('default', ['build']);
};
Integrating a new module then comes down to updating the symlink and jsdoc tasks.
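For completeness, the jsdocconf.json referenced above needs nothing special for tutorials to work; a minimal sketch (the markdown plugin here is an assumption, useful if your tutorials are written in Markdown) could be:
{
  "tags": { "allowUnknownTags": true },
  "plugins": ["plugins/markdown"]
}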

You could also just copy the files instead of linking to a bunch of directories.
E.g. create a directory in your project for documentation, into which you copy all relevant tutorials from wherever they live.
Then, in your npm scripts you can have something like this:
"copy:curry": "cp node_modules/#justinc/jsdocs/tutorials/curry.md doc/tutorials",
"predocs": "npm run copy:curry",
The docs script (not shown) runs jsdoc. predocs runs automatically before docs and, in this case copies over a tutorial in one of my packages over to doc/tutorials. You can then pass doc/tutorials as the single directory housing all your tutorials.
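For reference, a sketch of what that docs script might look like (the paths are assumptions; -u points JSDoc at the tutorials directory, -d at the output directory, and -r recurses into the source tree):
"docs": "jsdoc -u doc/tutorials -d doc -r src"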
In predocs you can keep adding things to copy with bash's && - or, if that's not available for whatever reason, you'll find npm packages that let you do the same thing without relying on whatever shell you're using (see the sketch below).
Now that I think about it, it's best to also delete doc/tutorials in predocs:
"predocs": "rm -rf doc/tutorials && mkdir -p doc/tutorials && npm run copy:tutorials",
That way any tutorials you once copied there (but are now not interested in) will be cleared each time you generate the docs.
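As one shell-independent variant, here is a sketch using the shx package (an assumption on my part; shx ships portable rm/mkdir/cp as npm binaries):
"copy:curry": "shx cp node_modules/@justinc/jsdocs/tutorials/curry.md doc/tutorials",
"predocs": "shx rm -rf doc/tutorials && shx mkdir -p doc/tutorials && npm run copy:curry",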
btw, I opened an issue for this: https://github.com/jsdoc3/jsdoc/issues/1330

Related

How to copy specific headers in Soong

How can I extract specific files in Soong to use as headers?
I was recently writing a blueprint (Android.bp) file for paho-mqtt-c. My consumers require MQTTClient.h, which paho-mqtt-c stores in src/ - what I would consider a "private" location. Reading their CMakeLists.txt, it turns out the build actually installs this and some other headers to include/.
As far as I can tell, Soong doesn't have this concept of installing, so it seems I could either export_include_dirs the src directory - which seems wrong - or use a cc_genrule to copy these headers elsewhere.
But that's where I hit another issue: I can't seem to figure out how to create a cc_genrule that takes n inputs and writes n outputs (n-to-n), i.e.
cc_genrule {
    name: "paho_public_headers",
    cmd: "cp $(in) $(out)",
    srcs: ["src/MQTTAsync.h", "src/MQTTClient.h", "src/MQTTClientPersistence.h", "src/MQTTLogLevels.h"],
    out: ["public/MQTTAsync.h", "public/MQTTClient.h", "public/MQTTClientPersistence.h", "public/MQTTLogLevels.h"],
}
results in the failed command cp <all-inputs> <all-outputs>, rather than what I wanted, which would be closer to iterating the command over each input/output pair.
My solution was simply to write four cc_genrules, but that doesn't seem great either.
Is there a better way? (ideally without writing a custom tool)
The solution was to use gensrcs with a shard_size of 1.
For example:
gensrcs {
    name: "paho_public_headers",
    cmd: "mkdir -p $(genDir) && cat $(in) > $(out)",
    srcs: [":paho_mqtt_c_header_files"],
    output_extension: "h",
    shard_size: 1,
    export_include_dirs: ["src"],
}
The export_include_dirs is important: with gensrcs there is little control over the output filename other than the extension, so it's easiest to keep the directory structure of $(in) for the $(out) files.
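For completeness, the :paho_mqtt_c_header_files entry above refers to a filegroup; a sketch of what that module might look like, assuming the header list from the question:
filegroup {
    name: "paho_mqtt_c_header_files",
    srcs: [
        "src/MQTTAsync.h",
        "src/MQTTClient.h",
        "src/MQTTClientPersistence.h",
        "src/MQTTLogLevels.h",
    ],
}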

React-Snap with Create-React-App and Service Workers

So, my understanding is that react-snap, as per its feature list, "Works out-of-the-box with create-react-app - no code-changes required."
I read through the documentation and saw that it requires some adjusting to work with Google Analytics, which I implemented.
However, it also suggests changes if one is going to use the default service worker that comes with CRA:
https://github.com/stereobooster/react-snap#service-workers
What is confusing is that it seems one has to perform an eject in order to make the necessary change:
navigateFallback: publicUrl + '/index.html',
You need to change this to an un-prerendered version of index.html - 200.html - otherwise you will see index.html flash on other pages (if you have any). See "Configure sw-precache without ejecting" for more information.
My question is - and note I am quite a novice - does one have to eject? I'd like to keep things simple, and the only place I could find this navigateFallback line was in the Webpack config.
Also, if the page flashes mentioned in the documentation don't bother me, is it okay to omit this step, or will it cause issues elsewhere?
Although this question is more than a year old, I'd like to take the opportunity, as I've managed to implement service workers with react-snap (although with a varying degree of success).
Here's stereobooster's reference on GitHub:
https://github.com/stereobooster/react-snap/blob/master/doc/recipes.md#configure-sw-precache-without-ejecting
You can configure it without ejecting. What you need to do is the following:
Download and install sw-precache and uglify-js:
npm install sw-precache uglify-js --save-dev
or
yarn add sw-precache uglify-js -D
Then, in your package.json, add the following entries (replacing the existing build script):
"scripts": {
"generate-sw": "sw-precache --root=build --config scripts/sw-precache-config.js && uglifyjs build/service-worker.js -o build/service-worker.js",
"build": "react-scripts build && react-snap && yarn run generate-sw"
}
Then create a folder at the root level (next to your package.json) called scripts, and add an sw-precache-config.js file:
module.exports = {
  // This directory should be the same as "reactSnap.destination",
  // whose default value is `build`
  staticFileGlobs: [
    "build/static/css/*.css",
    "build/static/js/*.js",
    "build/shell.html",
    "build/index.html"
  ],
  stripPrefix: "build",
  publicPath: ".",
  // there is "reactSnap.include": ["/shell.html"] in package.json
  navigateFallback: "/shell.html",
  // Ignore URLs starting with /__ (useful for Firebase):
  // https://github.com/facebookincubator/create-react-app/issues/2237#issuecomment-302693219
  navigateFallbackWhitelist: [/^(?!\/__).*/],
  // By default, a cache-busting query parameter is appended to requests
  // used to populate the caches, to ensure the responses are fresh.
  // If a URL is already hashed by Webpack, then there is no concern
  // about it being stale, and the cache-busting can be skipped.
  dontCacheBustUrlsMatching: /\.\w{8}\./,
  // configuration specific to this experiment
  runtimeCaching: [
    {
      urlPattern: /api/,
      handler: "fastest"
    }
  ]
};
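The reactSnap.include comment above refers to a matching entry in your package.json; a sketch, assuming the app-shell setup used in this config:
"reactSnap": {
  "include": ["/shell.html"]
}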
Note: if you're not using an app shell but are loading whole pages (meaning there's no dynamic content), replace navigateFallback: "/shell.html" with navigateFallback: "/200.html".
This basically allows you to cache the entire page.
You can look for more information here:
https://github.com/stereobooster/an-almost-static-stack
One thing I'd recommend checking out (I'm about to start that process as well) is workbox-sw.
What to do if React-Snap fails
error at / TypeError: Cannot read property 'ok' of null
Or
ERROR: The process with PID 38776 (child process of PID 26920) could not be terminated. \node_modules\minimalcss\src\run.js:13:35)
Reason: There is no running instance of the task.
You may get these infamous errors. I don't know exactly what causes them, but I know they're mentioned here, and here. In this case, delete the build folder, open a new terminal window, and try again.
If the problem still persists, then break down the script:
Do:
"scripts": {
"build": "react-scripts build"
"postbuild": "react-snap",
"generate-sw": "sw-precache --root=build --config scripts/sw-precache-config.js && uglifyjs build/service-worker.js -o build/service-worker.js",
}
And try running them independently.
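For example (npm runs postbuild automatically after build, so react-snap kicks in right after the CRA build, and generate-sw is then run by hand):
npm run build
npm run generate-sw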

Include node module as a file to inject

I want to include /node_modules/es6-shim/es6-shim.min.js on the client side in Sails v0.11.
I've tried including it in the pipeline like so:
var jsFilesToInject = [
  // Load sails.io before everything else
  'js/dependencies/sails.io.js',
  /* INCLUDE NODE MODULE */
  '/node_modules/es6-shim/es6-shim.min.js',
  // Dependencies like jQuery, or Angular are brought in here
  'js/dependencies/**/*.js',
  // All of the rest of your client-side js files
  // will be injected here in no particular order.
  'js/**/*.js',
  // Use the "exclude" operator to ignore files
  // '!js/ignore/these/files/*.js'
];
Is this possible? I don't really want to use Bower or a CDN; I would like to install and update the dependency via npm.
The simplest way to accomplish this would be to leave the pipeline.js file alone and just make a symlink inside of assets/js pointing to the file you want, e.g.:
ln -s ../../node_modules/es6-shim/es6-shim.min.js assets/js/es6-shim.min.js
Next time you run sails lift, Grunt will see the new JavaScript file in your assets/js folder and process it with the rest.
If this is for some reason not an option, you'll need to add a new subtask to the tasks/copy.js Grunt task:
dev_es6: {
  files: [{
    expand: true,
    src: ['./node_modules/es6-shim/es6-shim.min.js'],
    dest: '.tmp/public/js'
  }]
}
and then add that to the compileAssets task in tasks/register/compileAssets:
module.exports = function (grunt) {
  grunt.registerTask('compileAssets', [
    'clean:dev',
    'jst:dev',
    'less:dev',
    'copy:dev',
    'copy:dev_es6', // <-- adding our new subtask
    'coffee:dev'
  ]);
};

Gulp + Browserify: CoffeeScript not loading when loading files from node_modules

After setting up the folder structure for my Gulp project, I was wondering how to handle paths in Browserify, and found this page: https://github.com/substack/browserify-handbook#organizing-modules. It recommends putting common application parts in a subfolder of node_modules. This appears to be working - it finds the files - but it's not applying my coffeeify transform, so it throws errors because it tries to interpret them as JavaScript. Any ideas how to fix this? This is my browserify config:
browserify: {
  // Enable source maps
  debug: true,
  // Additional file extensions to make optional
  extensions: ['.coffee', '.hbs'],
  // A separate bundle will be generated for each
  // bundle config in the list below
  bundleConfigs: [{
    entries: src + '/javascript/app.coffee',
    dest: dest,
    outputName: 'app.js'
  }, {
    entries: src + '/javascript/head.coffee',
    dest: dest,
    outputName: 'head.js'
  }]
}
and these are the relevant bits from my package.json:
"browserify": {
"transform": [
"coffeeify",
"hbsfy"
]
}
Transforms aren't applied to files in node_modules unless they are marked as being global: https://github.com/substack/node-browserify#btransformtr-opts. If you choose to make a transform global, be warned: the documentation suggests against it:
Use global transforms cautiously and sparingly, since most of the time
an ordinary transform will suffice.
You also won't be able to specify the transform in package.json:
You can also not configure global transforms in a package.json like
you can with ordinary transforms.
The two options are to apply it programmatically, by passing {global: true} as the transform options, or at the command line with the -g option:
browserify -g coffeeify main.coffee > bundle.js
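For the programmatic route, here is a minimal sketch using the Browserify API directly (the entry path and output file are assumptions based on the gulp config above):
// build.js - bundle a CoffeeScript entry, applying the transforms globally
var browserify = require('browserify');
var fs = require('fs');

var b = browserify('./src/javascript/app.coffee', {
  debug: true,                     // source maps, as in the gulp config
  extensions: ['.coffee', '.hbs']  // resolve these without explicit extensions
});

// global: true makes the transforms run on files under node_modules too
b.transform('coffeeify', { global: true });
b.transform('hbsfy', { global: true });

b.bundle()
  .on('error', function (err) { console.error(err.toString()); })
  .pipe(fs.createWriteStream('./build/app.js'));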

Is there any good grunt ftp plugin?

I've tried grunt-ftpush and grunt-ftp-deploy, but neither works properly; I ran into annoying bugs with both. An FTP task seems very important, and it's weird that I can't google a working one.
UPDATE
Here are my settings for grunt-ftp:
ftp: {
  options: {
    host: 'myhostname',
    user: 'myusername',
    pass: 'mypassword'
  },
  upload: {
    files: {
      'codebase/myprojectfolder': 'build/*'
    }
  }
}
I expected my local build folder to be copied to the server, but instead I get an error:
Fatal error: Unable to read "build/scripts" file (Error code: EISDIR).
The documentation is very poor, so I have no idea how to upload folders that contain other folders.
I tried many FTP plugins and, to my mind, only ftp_push was good enough for me. All the other plugins rely on minimatch-based file selection, which seemed buggy (when selecting which files to upload or not). Moreover, the idea of using a separate file to handle auth keys is not viable: if you want to store FTP data in an external JSON file and pull it into your Gruntfile.js, it is not possible at all... The developer should choose for himself what to do with authentication and not rely on an external auth system.
Anyway, the project is alive, and Robert-W has fixed many issues really quickly; that's a major advantage too while you're developing. Projects that are effectively dead are really painful.
https://github.com/Robert-W/grunt-ftp-push
I had been looking for an actually working way to push single files for quite a while, but I came down to using a shell script that makes the ftp call, and running the script with grunt-exec. This seemed way simpler than getting any of the FTP plugins to work, and it works on any *nix system.
script.sh
# -n: no auto-login, -i: no interactive prompting, -v: verbose output
ftp -niv ftp.host.com << EOF
user foo_user password
cd /path/to/destination
put thefilepush.txt
EOF
Gruntfile.js
exec: {
  ftpupload: {
    command: './script.sh'
  }
}
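To wire this up (a sketch; the deploy task name is my own), load grunt-exec, register a task, and make sure the script is executable (chmod +x script.sh):
grunt.loadNpmTasks('grunt-exec');
grunt.registerTask('deploy', ['exec:ftpupload']);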
Yes, grunt-ftp and grunt-sftp-deploy worked well for me.
Try grunt-ftp:
grunt.initConfig({
  ftp: { // Task
    options: { // Options
      host: 'website.com',
      user: 'johndoe',
      pass: '1234'
    },
    upload: { // Target
      files: { // Dictionary of files
        'public_html': 'src/*' // remote destination : source
      }
    }
  }
});
grunt.loadNpmTasks('grunt-ftp');
grunt.registerTask('default', ['ftp']);
After following a trail of outdated information and misinformation, I was finally able to get grunt-ftps-deploy to work for uploading individual files to a remote server inside a Docker container. Here is the grunt task:
ftps_deploy: {
  deploy: {
    options: {
      auth: {
        host: 'xx.xx.xx.xx',
        authKey: 'key1',
        port: 22,
        secure: true
      },
      silent: false
      // progress: true
    },
    files: [{
      expand: true,
      cwd: './',
      src: ['./js/clubServices.js', './js/controllers.js', './css/services.css'],
      dest: '/data/media/com_memberservices/',
    }]
  }
}
Initially I just received a "no files transferred" response. The key to getting it working was specifying the src files correctly in an array. None of the information I found specifically addressed this issue. Hope this helps someone else.
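For reference, the authKey above points at credentials kept outside the Gruntfile. Assuming grunt-ftps-deploy follows the same .ftppass convention as grunt-ftp-deploy (an assumption worth verifying against the plugin's README), that file would look something like:
{
  "key1": {
    "username": "yourusername",
    "password": "yourpassword"
  }
}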