I have a directory containing various folders, each of which holds MATLAB source files. Some of these folders have sub-folders that also contain MATLAB source files.
How can I create a TOC tree with Sphinx to contain the sub-folders in a nested way?
For example, when Main-Directory contains conf.py, index.rst, and moduleslist.rst along with the following folder structure:

Folder1
    abc.m
    def.m
Folder2
    Folder2.1
        ghi.m
    jkl.m
with this index.rst file:
.. toctree::
   :maxdepth: 1

   moduleslist
and this moduleslist.rst file:
.. toctree::
   :maxdepth: 2

Folder1
=========

.. automodule:: Folder1
   :members:

Folder2
=========

.. automodule:: Folder2
   :members:
But this does not include the sub-folder Folder2.1 or the files in it. I have tried adding Folder2/index to index.rst, with Folder2/index.rst containing the automodule for Folder2.1, but that didn't include the documentation for ghi.m.
How can I get Sphinx to show nested sub-folders in its TOC tree?
I've started using Sphinx and have run into this problem with documentation in general (it's not specific to the autodoc feature). This is how I got it to work, and it gives much better control over how the tree is built.
Treat each folder as a separate group. So at the Sphinx root directory you'll have the index.rst file that will look like this:
.. toctree::
   :maxdepth: 1

   Folder1/index
   Folder2/index
I use maxdepth: 1 so that it only lists the major group names.
Under Folder1 and Folder2 you'll need to add additional index.rst files:
#Folder1/index.rst

.. toctree::
   :maxdepth: 2

   abc.m
   def.m

#Folder2/index.rst

.. toctree::
   :maxdepth: 2

   Folder2.1/index
   jkl.m
As a side note, I've set up my index pages so that they have either just group listings (maxdepth: 1) or detail page listings (maxdepth: 2). I'm sure there is a way to show the folder/index at depth 1 and the files at depth 2.
Then within Folder2.1 you'll need your third index:
#Folder2.1/index.rst

.. toctree::
   :maxdepth: 2

   ghi.m
The Sphinx documentation covers nested toctrees, but I didn't find it as clear as this. Obviously you'll need the autodoc code for more complicated/deep tree structures.
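If the tree gets deep, writing every per-folder index.rst by hand becomes tedious. A short script can generate them; this is a hypothetical helper of my own (not part of Sphinx), sketched in Python under the assumption that each folder's pages already exist as .rst files:

```python
import os

def write_index(folder, entries, maxdepth=2):
    """Write a minimal index.rst containing a toctree over `entries`."""
    lines = [".. toctree::", f"   :maxdepth: {maxdepth}", ""]
    lines += [f"   {e}" for e in entries]
    with open(os.path.join(folder, "index.rst"), "w") as fh:
        fh.write("\n".join(lines) + "\n")

def build_indexes(root):
    """Create an index.rst in every folder under `root`: sub-folders are
    referenced as <sub>/index, and any existing .rst page is listed."""
    for dirpath, dirnames, filenames in os.walk(root):
        entries = [f"{d}/index" for d in sorted(dirnames)]
        entries += sorted(
            os.path.splitext(f)[0]
            for f in filenames
            if f.endswith(".rst") and f != "index.rst"
        )
        write_index(dirpath, entries)
```

You would still create the automodule stub pages (one .rst per .m file, or per folder) before running it; the script only wires up the nesting.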
Related
I have a requirement where I have a list of files to be renamed based on a pattern. There are multiple such patterns, e.g.:
file1_type1.txt
file2_type1.txt
file1_type2.txt
file2.type2.txt
file.type3.txt
I want to rename the above files as :
1.a_(namingconvention)type1.txt
1.b(namingconvention)type1.txt
2.a(namingconvention)type2.txt
2.b(namingconvention)_type2.txt
3.(namingconvention)_type3.txt
The logic to rename should be:
Look at the type of the file -> type1, type2, type3, etc.
If multiple files of a type are present, the first file becomes 1.a-(namingconvention), the second 1.b-(namingconvention), and so on.
namingconvention should be a variable.
Please help me out, as I am not familiar with PowerShell.
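The question asks for PowerShell, but the grouping logic is language-agnostic, so here is a sketch in Python. The exact output format (underscores vs. hyphens) is read loosely from the example above, so treat the name patterns as assumptions:

```python
import re
from collections import defaultdict

# Example input list; in practice this would come from a directory listing.
files = [
    "file1_type1.txt", "file2_type1.txt",
    "file1_type2.txt", "file2.type2.txt",
    "file.type3.txt",
]

naming = "namingconvention"  # the variable part of the new name

# Group files by their typeN suffix.
groups = defaultdict(list)
for name in files:
    m = re.search(r"(type\d+)\.txt$", name)
    if m:
        groups[m.group(1)].append(name)

# Number the groups 1, 2, 3, ... and letter the members a, b, c, ...
renames = {}
for idx, typ in enumerate(sorted(groups), start=1):
    members = sorted(groups[typ])
    if len(members) == 1:
        renames[members[0]] = f"{idx}.({naming})_{typ}.txt"
    else:
        for j, old in enumerate(members):
            letter = chr(ord("a") + j)
            renames[old] = f"{idx}.{letter}_({naming})_{typ}.txt"

# Applying the plan would then be: os.rename(old, new) per pair in renames.
```

The same shape translates to PowerShell with Group-Object on the type suffix and Rename-Item in the inner loop.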
I have a folder structure in which I want to detect duplicate folder names, e.g.:
C:\APP10001.001
C:\APP10001.002
C:\APP10002.001
C:\APP10003.002
C:\APP10003.003
C:\APP10003.004
C:\APP10004.002
In this case I want to detect that there are folders with the same name (without the folder extension): C:\APP10001 and C:\APP10003.
Output should look like:
APP10001 2
APP10003 3
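The counting logic can be sketched in Python (the folder list is hard-coded here as an assumption, standing in for a real directory scan):

```python
from collections import Counter

# Example folder names; in practice something like:
# [p.name for p in Path("C:/").iterdir() if p.is_dir()]
folders = [
    "APP10001.001", "APP10001.002", "APP10002.001",
    "APP10003.002", "APP10003.003", "APP10003.004",
    "APP10004.002",
]

# Strip the numeric extension and count each base name.
counts = Counter(name.rsplit(".", 1)[0] for name in folders)

# Report only the duplicated base names.
duplicates = {base: n for base, n in counts.items() if n > 1}
for base, n in sorted(duplicates.items()):
    print(base, n)
# prints:
# APP10001 2
# APP10003 3
```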
I have a perl script that reads a .txt and a .bam file, and creates an output called output.txt.
I have a lot of files that are all in different folders, but are only slightly different in the filename and directory path.
All of my txt files are in different subfolders called PointMutation, with the full path being
/Volumes/Lab/Data/Darwin/Patient/[Plate 1/P1H10]/PointMutation
The text in brackets is the part that changes, but the Patient subfolder contains all of my txt files.
My .bam file is located in a subfolder named DNA with a full path of
/Volumes/Lab/Data/Darwin/Patient/[Plate 1/P1H10]/SequencingData/DNA
Currently I run this script from the terminal:
cd "/Volumes/Lab/Data/Darwin/Patient/[Plate 1/P1H10]/PointMutation"
perl ~/Desktop/Scripts/Perl.pl "/Volumes/Lab/Data/Darwin/Patient/[Plate 1/P1H10]/PointMutation/txtfile.txt" "/Volumes/Lab/Data/Darwin/Patient/[Plate 1/P1H10]/SequencingData/DNA/bamfile.bam"
With only one or two files that is fairly easy, but I would like to automate it as the number of files grows. Also, once I have run the script on a folder I don't want to run it again, but I will get more information from the same patient later. Is there a way to block a folder from being read?
I would do something like:
for my $dir (glob "/Volumes/Lab/Data/Darwin/Patient/*/") {
    # skip anything that is not a directory
    next unless -d $dir;

    my $txt = "$dir/PointMutation/txtfile.txt";
    my $bam = "$dir/SequencingData/DNA/bamfile.bam";

    # ... your magical stuff here
}
This is assuming that all directories under /Volumes/Lab/Data/Darwin/Patient/ follow the convention.
That said, a more long-term/robust way of organizing analyses with lots of different files all over the place is either 1) to organize all the files needed for each analysis under one directory, or 2) to create meta files (I'd use JSON/YAML) that contain the necessary file names.
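As a sketch of option 2 (the paths and field names here are made up for illustration), a JSON meta file can also carry a done flag, which incidentally answers the "block a folder from being read" part: finished analyses are simply skipped on the next run.

```python
import json

# Hypothetical meta file content; in practice load it from disk,
# e.g. json.load(open("analyses.json")).
meta_json = """
[
  {"txt": "/Volumes/Lab/Data/Darwin/Patient/Plate 1/P1H10/PointMutation/txtfile.txt",
   "bam": "/Volumes/Lab/Data/Darwin/Patient/Plate 1/P1H10/SequencingData/DNA/bamfile.bam",
   "done": false}
]
"""

analyses = json.loads(meta_json)
for entry in analyses:
    if entry["done"]:          # already processed -- skip this folder
        continue
    txt, bam = entry["txt"], entry["bam"]
    # ... run the Perl script / analysis on txt and bam here ...
    entry["done"] = True       # mark as processed

# Writing the list back with json.dump makes the skip persistent across runs.
```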
I am new to doxygen and Sphinx. I have a requirement to create documentation for a project programmed in C.
The idea is to generate XML files from doxygen for each file and then use breathe as a bridge to Sphinx for creating HTML pages.
I am successful in generating the XML files and am able to get the HTML pages as well.
However, each HTML file contains the contents of all files, rather than one HTML page per file/directory.
i.e. with
dir1 => file1.h and file1.c
dir2 => file2.h and file2.c
the output is:
file1.html => file1.xml & file2.xml
file2.html => file1.xml & file2.xml
Expected output:
file1.html containing file1.xml (both header and implementation)
file2.html containing file2.xml
Here are the settings:
Doxyfile(doxygen)
GENERATE_XML = YES
conf.py(sphinx)
breathe_projects = { <dir1_name>: <xml_path>,
<dir2_name>: <xml_path> }
Could anybody help me in setting the right configuration to get the expected output please?
For the above requirement, a Doxyfile per directory should be created; each Doxyfile then generates XML output for its own directory only.
i.e. for the files
dir1 => file1.h and file1.c
dir2 => file2.h and file2.c
create
Doxyfile1 in dir1
Doxyfile2 in dir2
This generates separate index.xml files per directory.
In the Sphinx configuration (conf.py), the location of each directory's XML output should be provided:
i.e. breathe_projects = { <dir1_name>: <dir1/xml_path>,
<dir2_name>: <dir2/xml_path> }
With the above changes, separate HTML files are generated: file1.html (with file1.h and file1.c) and file2.html (with file2.h and file2.c).
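As a concrete sketch, with directory and path names that are only assumptions about your layout, conf.py might look like:

```python
# conf.py -- paths are hypothetical; point each entry at the XML output
# directory produced by that directory's Doxyfile.
extensions = ["breathe"]

breathe_projects = {
    "dir1": "../dir1/doxygen/xml",
    "dir2": "../dir2/doxygen/xml",
}
breathe_default_project = "dir1"
```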
I have one directory with 50 folders, and each folder has 50 files. I have a script to read all the files in each folder and save the results, but I currently need to type the folder name every time. Is there any loop or batch tool I can use? Any suggestions or code greatly appreciated.
There may be a cleaner way to do it, but the output of the dir command can be assigned to a variable. This gives you a struct array, with the pertinent fields being name and isdir. For instance, assuming that the top-level directory (the one with 50 folders) only has folders in it, the following will give you the first folder's name:
folderList = dir();
folderList(3).name
(Note that the first two entries in the folderList struct array will be "." (the current directory) and ".." (the parent directory), so if you want the first directory with files in it you have to go to the third entry.) If you wish to go through the folders one by one, you can do something like the following:
folderList = dir();
for i = 3:length(folderList)
    curr_directory = pwd;
    cd(folderList(i).name); % change into the next folder
    % operate on the files as if you were in that directory
    cd(curr_directory);     % return to the top-level directory
end
If the top-level directory contains files as well as folders, then you need to check the isdir field of each entry in the folderList struct array: if it is 1, the entry is a directory; if it is 0, it's a file.
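If stepping outside MATLAB is an option, the same traversal can be sketched in Python (the helper name is mine, and the top-level path is an assumption):

```python
import os

def list_files_per_folder(top):
    """Map each immediate sub-folder of `top` to the sorted list of its files."""
    result = {}
    for entry in sorted(os.listdir(top)):
        path = os.path.join(top, entry)
        if os.path.isdir(path):          # analogous to checking the isdir field
            result[entry] = sorted(os.listdir(path))
    return result

# Usage sketch:
# for folder, files in list_files_per_folder(".").items():
#     ...process each file in `files`...
```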