How to run jsdoc on whole directory in ubuntu - jsdoc

I just need to run jsdoc on a whole directory containing .js files. I am doing this on individual files in the Ubuntu terminal by issuing the command jsdoc abc.js, but what I need is to apply this command to all files in the directory at once, so that documentation for every .js file in that directory is generated by a single command. Thanks for any help you would give.

You can go recursive by passing the -r parameter:
$ jsdoc -r .

Even though the question only asks how to run JSDoc for a specific directory, I think this thread could use some more information so people reading it can be aware of other strategies that could be useful.
Running JSDoc for a specific directory
This is the simplest one and the direct answer to the question. You can run JSDoc for all files inside a directory using the --recurse or -r option.
Example:
$ jsdoc -r ./src/
This will run JSDoc for all files inside the src/ directory and its subdirectories.
Running JSDoc for multiple directories
This and the following sections aren't exactly what the question asked for but hopefully will be useful for people using a search engine that found this thread.
You may need to run JSDoc for files located in several different directories. For this you can just pass multiple arguments along with the --recurse option.
Example:
$ jsdoc -r ./client/ ./server/
This will run JSDoc for all files inside both client/ and server/ and their subdirectories.
NOT running JSDoc for some directories
This is slightly more complicated and will require use of JSDoc's configuration file. After creating the configuration file, you can run JSDoc using the --configure (or -c) option to select the configuration file you want to use.
Example:
Create a file conf.json as shown below:
{
  "source": {
    "include": [ "." ],
    "exclude": [ "node_modules/" ]
  }
}
Then run JSDoc like this:
$ jsdoc -c ./conf.json -r
With that configuration, JSDoc will run for all files inside your current directory and its subdirectories, except for the ones located inside node_modules/ and its subdirectories.
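If you need to leave out more than one directory, the exclude option accepts multiple entries. A minimal sketch (dist/ and test/ are hypothetical directory names for this example):
{
  "source": {
    "include": [ "." ],
    "exclude": [ "node_modules/", "dist/", "test/" ]
  }
}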
Sources
For more info on the available options for the jsdoc command, see the JSDoc command-line documentation.
For more info on the JSDoc configuration file, see the JSDoc configuration documentation.

Related

Build libraries in external folder of AOSP source code

I noticed that a simple
$ . build/envsetup.sh
$ lunch aosp_hammerhead-eng
$ make -j16
would not also build the external libraries in the ./external folder.
How am I supposed to build source code in such folder?
In particular, I am modifying source code in the libselinux in ./external/selinux/libselinux/src/
Thanks!
I found out that, by using the mm command, it is possible to build all of the modules in the current directory.
So, if you are in ./external/selinux/libselinux/ you can build all code inside that directory just by typing the command mm.
I also found that the same code I was modifying inside ./external/selinux/libselinux/ is also located in ./external/libselinux/. However, it is that directory which is actually built by the make -j16 command.
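For reference, a complete session might look like this (a sketch; mm just needs the build environment set up first, with the same device target as in the question):
$ . build/envsetup.sh
$ lunch aosp_hammerhead-eng
$ cd external/selinux/libselinux/
$ mm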

How to parse files using ebrowse

I have a folder tree which contains my C++ files. After reading this document,
http://www.gnu.org/software/emacs/manual/html_node/ebrowse/Generating-browser-files.html#Generating-browser-files
I still don't know how to parse all my C++ files in the folder tree easily.
I can execute the command below in each folder manually, but that looks clumsy. I could write a script to do it recursively, but I want to know if there is a better idea.
ebrowse *.h
I use ebrowse at work. I don't have my bash alias at hand, but from memory it looks like this:
ebrowse $(find . -name "*.[hc]pp")
Don't hesitate to replace the . with the path to the root of your project.
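If you want to turn that into a reusable shell function, a sketch along these lines should work (the name ebrowse-all and the extension list are assumptions; adjust to taste):
# Hypothetical helper for ~/.bashrc: feed every header/source file
# under a directory (default: the current one) to ebrowse.
ebrowse-all() {
    find "${1:-.}" -type f \( -name '*.h' -o -name '*.hpp' -o -name '*.cpp' \) -print0 |
        xargs -0 ebrowse
}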
How about opening it in a dired buffer: M-x find-name-dired RET RET *.h RET, then t to mark all files, then ! ebrowse * RET?
In other words: use dired to locate all the files you need, then run a shell command on them, the shell command being ebrowse.
How about ebrowse **/*.h **/*.cpp? I don't know which shells support ** nowadays, but at least Zsh has supported it for a decade or two.
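For what it's worth, Bash (4.0 and later) supports ** as well once the globstar option is turned on; a quick sketch:
$ shopt -s globstar   # enable recursive ** globbing in Bash 4.0+
$ ebrowse **/*.h **/*.cpp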

Parse files without extensions

I'm setting up doxygen for a project. The module files have their language's standard extension (.py), but the executable scripts do not. How can I get doxygen to read these correctly (as Python in this case)? I tried
EXTENSION_MAPPING = ''=Python
but that looks for files named "blah.". I'm on a Unix system, so the concept of a file extension doesn't even exist here. And this is an existing project, so renaming all of the existing scripts is not an option.
Any ideas?
I modified doxygen to handle filenames without dots in them, and I'll submit the patch to the maintainers.
One simple trick is to make a symbolic link to the script that does have the right extension, and let doxygen process the symbolic link.
Say you have a python script called test; then do
ln -s test test.py
and then specify the test.py file in doxygen's configuration file
INPUT = test.py
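If there are many such scripts, you could create the links in a loop; a sketch, assuming the scripts live in a scripts/ directory (a hypothetical name):
# Link every extensionless file in scripts/ as a .py symlink
cd scripts
for f in *; do
    case "$f" in
        *.*) ;;                              # skip files that already have an extension
        *)   [ -f "$f" ] && ln -sf "$f" "$f.py" ;;
    esac
done
and then point INPUT at the directory instead of listing the files one by one.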
According to doxygen's EXTENSION_MAPPING docs, the no_extension placeholder matches files without an extension, so
EXTENSION_MAPPING = no_extension=python
should work.
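Note that doxygen also has to be told to read extensionless files at all; per the docs, FILE_PATTERNS must include a bare * for them to be picked up. A minimal Doxyfile sketch (the paths are assumptions):
INPUT             = .
FILE_PATTERNS     = *.py *
EXTENSION_MAPPING = no_extension=python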

Why doesn't wget get java files recursively?

I am trying to download the whole folder structure and all the files under a folder on a website using wget.
Say there is a website like:
http://test/root. Under root it is like:
/A
/A1/file1.java
/B
/B1/file2.html
My wget cmd is:
wget -r http://test/root/
I got all the folders and the html files, but no java files. Why is that?
UPDATE1:
I can access the file in the browser using:
http://test/root/A/A1/file1.java
I can also download this individual file using:
wget http://test/root/A/A1/file1.java
wget can only follow links.
If there is no link to the files in the subdirectories, then wget will not find those files. wget will not guess any file names, it will not test exhaustively for file names, and wget does not practice black magic.
Just because you can access the files in a browser does not mean that wget can necessarily retrieve them. Your browser has code able to recognize the directory structure; wget only knows what you tell it.
You can try adding the java files to an accept list first; perhaps that's all it needs:
wget -r -A "*.java" http://test/root
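Note that with an accept list, wget still downloads HTML pages temporarily so it can follow their links, then deletes the ones that don't match the list. Adding --no-parent keeps the crawl from climbing above root:
wget -r --no-parent -A "*.java" http://test/root/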
But it sounds like you're trying to get a complete offline mirror of the site. Let's start, as with any command we're trying to figure out, with man wget:
Wget can follow links in HTML, XHTML, and CSS pages, to create local
versions of remote web sites, fully recreating the directory structure
of the original site. This is sometimes referred to as "recursive
downloading." While doing that, Wget respects the Robot Exclusion
Standard (/robots.txt). Wget can be instructed to convert the links in
downloaded files to point at the local files, for offline viewing.
What We Need
1. Proper links to the file to be downloaded.
In your index.html file, you must provide a link to the Java file, otherwise wget will not recognize it as needing to be downloaded. For your current directory structure, ensure file2.html contains a link to the java file, formatted as a relative link going up from the current directory:
<a href="../../A/A1/file1.java">JavaFile</a>
However, if file1.java is not sensitive and you routinely do this, it's cleaner and less code to put an index.html file in your root directory and link to:
<a href="A/A1/file1.java">JavaFile</a>
If you only want the Java files and want to ignore HTML, you can use --reject like so:
wget -r -nH --reject="file2.html" http://test/root/
### Or to reject ALL html files ###
wget -r -nH --reject="*.html" http://test/root/
This will recursively (-r) go through all directories starting at the point we specify.
2. Respect robots.txt
Ensure that if you have a /robots.txt file in your /root/ directory, it does not prevent crawling. If it does, you need to instruct wget to ignore it by adding the following option to your wget command:
wget ... -e robots=off http://test/root
3. Convert remote links to local files.
Additionally, wget must be instructed to convert links so they point at the local downloaded files. If you've done everything above correctly, you should be fine here. The easiest way I've found to get all files, provided nothing is hidden behind a non-public directory, is using the --mirror option.
Try this:
wget -mpEk http://test/root/
# If robots.txt is present:
wget -mpEk -e robots=off http://test/root/
Using -m instead of -r is preferred, as it doesn't have a maximum recursion depth and it downloads all assets. Mirror is pretty good at determining the full depth of a site; however, if you have many external links you could end up downloading more than just your site, which is why we use -p -E -k. All the prerequisite files needed to render the pages and a preserved directory structure should be the output; -k converts links to local files.
Since you should have a link set up, you should get a file1.java inside the ../A1/ directory. However, this command should work as is, even without a specific link to the java file inside your index.html or file2.html; having one doesn't hurt, as it preserves the rest of your directory structure. Mirror mode also works with a directory structure that's set up over ftp://.
General rule of thumb:
Depending on the size of the site you are mirroring, you're sending many calls to the server. In order to prevent yourself from being blacklisted or cut off, use the wait option to rate-limit your downloads. For a site the size of the one you posted you shouldn't have to, but for any large site you're mirroring you'll want to use it:
wget -mpEk --no-parent -e robots=off --random-wait http://test/root/

CVS export inside a module

I have checked out a module. It's in /home/user/repositories/repository.
I want to export a folder inside this module. Suppose it's folder3.
/home/user/repositories/repository/folder1/folder2/folder3/
I go into
/home/user/repositories/repository/folder1/folder2/
and try to run
cvs export -r MYTAG -d MY_DIR folder3
But it doesn't work. I get:
-f server: cannot find module `folder3' - ignored
cvs [export aborted]: cannot expand modules
It's possible to export a folder inside a module from Eclipse or other visual editors. Which command do they call to do it? Is it possible to "log" the command executed by these visual editors?
Do I have to write the full path to export it, or does an alternative exist? What am I doing wrong? This can't be so difficult; I just want to export a specific folder from a CVS module using the command line...
My bad. I can export a tag that tagged a folder inside my module, using
cvs -d :pserver:user@localhost:2401/opt/cvs/ -q export -rProducts-250 -d Products-250 repository/folder1/folder2/folder3/
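In general form (all names below are placeholders), the trick is that the path given to export is resolved relative to the repository root, so it has to start with the module name:
cvs -d :pserver:user@host:/path/to/cvsroot -q export -r MYTAG -d OUTPUT_DIR module/sub/folder/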