doxygen + sphinx (breathe) for documentation

I am new to doxygen and Sphinx. I need to create documentation for a project written in C.
The idea is to generate XML files from doxygen for each source file and then use breathe as a bridge to Sphinx for creating HTML pages.
I have successfully generated the XML files and can produce the HTML pages as well.
However, each HTML file contains the contents of all files, rather than one HTML page per file/directory.
i.e. dir1 => file1.h and file1.c
     dir2 => file2.h and file2.c
Output:
file1.html => file1.xml & file2.xml
file2.html => file1.xml & file2.xml
Expected output:
file1.html to contain file1.xml (both header and implementation)
file2.html to contain file2.xml
Here are the settings:
Doxyfile (doxygen):
GENERATE_XML = YES
conf.py (sphinx):
breathe_projects = { <dir1_name>: <xml_path>,
                     <dir2_name>: <xml_path> }
Could anybody help me set the right configuration to get the expected output?

For the above requirement, one Doxyfile per directory should be created. Each Doxyfile then generates the XML files for its own directory.
i.e. for the files
dir1 => file1.h and file1.c
dir2 => file2.h and file2.c
create
Doxyfile1 in dir1
Doxyfile2 in dir2
This generates separate index.xml files per directory.
In the Sphinx configuration (conf.py), the location of each directory's XML output should be provided:
i.e. breathe_projects = { <dir1_name>: <dir1/xml_path>,
                          <dir2_name>: <dir2/xml_path> }
With the above changes, separate HTML files are generated: file1.html (with file1.h and file1.c) and file2.html (with file2.h and file2.c).
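A minimal conf.py sketch of that setup. The project names and XML paths here are placeholders, assuming each per-directory Doxyfile writes its XML output to an xml/ folder next to the sources:

```python
# conf.py -- hypothetical project names and per-directory XML paths
extensions = ["breathe"]

breathe_projects = {
    "dir1": "../dir1/xml",  # output of Doxyfile1 (GENERATE_XML = YES)
    "dir2": "../dir2/xml",  # output of Doxyfile2
}
breathe_default_project = "dir1"
```

Each .rst page then pulls in a single project, e.g. `.. doxygenfile:: file1.c` with `:project: dir1`, so file1.html is built only from dir1's XML.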

determine if the file is empty and separate them into different file

The goal of my code is to look into a certain folder and write the names of all the non-empty files in that folder to one new text file, and the names of all the empty files (no text) to another. My current code can only create a single text file listing the names of all the files, regardless of their content. I want to know how to set up an if statement based on the content of each file.
function ListFile
dirName = '';
files = dir(fullfile(dirName, '*.txt'));
files = {files.name};
[fid, msg] = fopen('output.txt', 'w+t');
assert(fid >= 0, msg)
fprintf(fid, '%s\n', files{:});
fclose(fid);
EDIT: The linked solution in Stewie Griffin's comment is way better. Use this!
A simple approach would be to iterate all files, open them, and check their content. Caveat: If you have large files, this approach might be memory intensive.
A possible code for that could look like this:
function ListFile
dirName = '';
files = dir(fullfile(dirName, '*.txt'));
files = {files.name};
fidEmpty = fopen('output_empty_files.txt', 'w+t');
fidNonempty = fopen('output_nonempty_files.txt', 'w+t');
for iFile = 1:numel(files)
    content = fileread(files{iFile})   % no semicolon: echoes content for debugging
    if (isempty(content))
        fprintf(fidEmpty, '%s\n', files{iFile});
    else
        fprintf(fidNonempty, '%s\n', files{iFile});
    end
end
fclose(fidEmpty);
fclose(fidNonempty);
I have two non-empty files nonempty1.txt and nonempty2.txt as well as two empty files empty1.txt and empty2.txt. Running this code, I get the following outputs.
Debugging output from fileread:
content =
content =
content = Test
content = Another test
Content of output_empty_files.txt:
empty1.txt
empty2.txt
Content of output_nonempty_files.txt:
nonempty1.txt
nonempty2.txt
Matlab isn't really the optimal tool for this task (although it is capable). To generate the files you're looking for, a command line tool would be much more efficient.
For example, using GNU find you could do
find . -type f -not -empty -ls > notemptyfiles.txt
find . -type f -empty -ls > emptyfiles.txt
to create the text files you desire. Here's a link for doing something comparable using the Windows command line. You could also call these commands from within Matlab using the system command; this would be much faster than iterating over the files from within Matlab.
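For comparison, the same split can be sketched in a few lines of Python. Checking the file size avoids reading the content at all, so the large-file caveat above goes away (the function name and output handling are made up for the example):

```python
from pathlib import Path

def split_by_emptiness(folder):
    """Return (empty, nonempty) lists of .txt file names in `folder`."""
    empty, nonempty = [], []
    for p in sorted(Path(folder).glob("*.txt")):
        # file size from stat() -- no need to read the file into memory
        (empty if p.stat().st_size == 0 else nonempty).append(p.name)
    return empty, nonempty
```

The two name lists can then be written out with e.g. Path('output_empty_files.txt').write_text('\n'.join(empty)).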

Save files into different format using original filenames

After reading the .csv files in a directory, I want to save each of them as an .html file using its original file name. But my code below carries the extension (.csv) over into the new file names.
For example,
Original files: File1.csv, File2.csv
Result files: File1.csv.html, File2.csv.html
I want to remove ".csv" from the new file names.
import pandas as pd
import glob, os
os.chdir(r"C:\Users\.....\...")
for counter, file in enumerate(glob.glob("*.csv")):
    df = pd.read_csv(file, skipinitialspace=False, sep=',', engine='python')
    df.to_html(os.path.basename(file) + ".html")
The code below removed ".csv" but also ".html":
df.to_html(os.path.basename(file + ".html").split('.')[0])
My expectation is:
File1.html, File2.html
EDIT:
Another post, "How to get the filename without the extension from a path in Python?", suggested how to list existing files without extensions. My issue, however, is to read existing files in a directory and save them using their original file names (minus the original extension) with a new extension.
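One way to get there, sketched with os.path.splitext, which strips only the final extension (unlike split('.'), which also breaks on file names containing extra dots). The helper name is my own:

```python
import os

def html_name(csv_path):
    """Map a CSV path to an .html file name, e.g. 'File1.csv' -> 'File1.html'."""
    stem, _ext = os.path.splitext(os.path.basename(csv_path))
    return stem + ".html"

# inside the loop above, something like: df.to_html(html_name(file))
```

pathlib offers the same in one call: Path(file).with_suffix('.html').name.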

Perl: Run script on multiple files in multiple directories

I have a perl script that reads a .txt and a .bam file, and creates an output called output.txt.
I have a lot of files that are all in different folders, but are only slightly different in the filename and directory path.
All of my txt files are in different subfolders called PointMutation, with the full path being
/Volumes/Lab/Data/Darwin/Patient/[Plate 1/P1H10]/PointMutation
The text in brackets is the part that changes, but the Patient folder contains all of my txt files.
My .bam file is located in a subfolder named DNA with a full path of
/Volumes/Lab/Data/Darwin/Patient/[Plate 1/P1H10]/SequencingData/DNA
Currently how I run this script is go on the terminal
cd /Volumes/Lab/Data/Darwin/Patient/[Plate 1/P1H10]/PointMutation
perl ~/Desktop/Scripts/Perl.pl "/Volumes/Lab/Data/Darwin/Patient/[Plate
1/P1H10]/PointMutation/txtfile.txt" "/Volumes/Lab/Data/Darwin/Patient/[Plate
1/P1H10]/SequencingData/DNA/bamfile.bam"
With only one or two files that is fairly easy, but I would like to automate it as the number of files grows. Also, once I have run the script on a folder I don't want to run it again, even though I will get more data from the same patient. Is there a way to block a folder from being read?
I would do something like:
for my $dir (glob "/Volumes/Lab/Data/Darwin/Patient/*/") {
    # skip anything that is not a directory
    next if ! -d $dir;

    my $txt = "$dir/PointMutation/txtfile.txt";
    my $bam = "$dir/SequencingData/DNA/bamfile.bam";
    # ... your magical stuff here
}
This is assuming that all directories under /Volumes/Lab/Data/Darwin/Patient/ follow the convention.
That said, a more long-term/robust way of organizing analyses with lots of different files all over the place is either (1) to organize all files necessary for each analysis under one directory, or (2) to create meta files (I'd use JSON/YAML) that list the necessary file names.
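A minimal sketch of the meta-file idea in Python; the manifest layout, field names, and paths are all invented for illustration. Each analysis lists its input pair plus a done flag, so folders that have already been processed are skipped:

```python
import json

# hypothetical manifest -- in practice this would live in its own .json file
MANIFEST = """
[
  {"txt": "P1H10/PointMutation/txtfile.txt",
   "bam": "P1H10/SequencingData/DNA/bamfile.bam",
   "done": true},
  {"txt": "P2A03/PointMutation/txtfile.txt",
   "bam": "P2A03/SequencingData/DNA/bamfile.bam",
   "done": false}
]
"""

def pending_jobs(manifest_text):
    """Return only the jobs not yet marked done."""
    return [job for job in json.loads(manifest_text) if not job["done"]]

for job in pending_jobs(MANIFEST):
    # hand each pending pair to the existing script, e.g.:
    # subprocess.run(["perl", "Perl.pl", job["txt"], job["bam"]], check=True)
    print(job["txt"], job["bam"])
```

After a successful run, flip the job's done flag and rewrite the manifest, which gives you the "block this folder" behaviour for free.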

Adding files in subfolders to sphinx documentation (sphinxcontrib-matlabdomain)

I have a directory containing various folders, each of them holding MATLAB source files. Some of these folders have sub-folders that also contain MATLAB source files.
How can I create a TOC tree with Sphinx that contains the sub-folders in a nested way?
For example, when Main-Directory contains conf.py, index.rst, and moduleslist.rst along with the following folder structure:
Folder1
    abc.m
    def.m
Folder2
    Folder2.1
        ghi.m
    jkl.m
with this index.rst file:
.. toctree::
   :maxdepth: 1

   moduleslist
and this moduleslist.rst file:
.. toctree::
   :maxdepth: 2

Folder1
=========

.. automodule:: Folder1
   :members:

Folder2
=========

.. automodule:: Folder2
   :members:
But this does not include the sub-folder Folder2.1 or the files in it. I have tried adding Folder2/index to index.rst, with Folder2/index.rst containing the automodule for Folder2.1, but that didn't include documentation for ghi.m.
How can I get Sphinx to show nested sub-folders in its TOC tree?
I've started using Sphinx and have run into this problem with documentation in general (not specific to the autodoc feature). This is how I got it to work, with much better control over how the tree is built.
Treat each folder as a separate group. So at the Sphinx root directory you'll have the index.rst file that will look like this:
.. toctree::
   :maxdepth: 1

   Folder1/index
   Folder2/index
I use maxdepth: 1 so that it only lists the major group names.
Under Folder1 and Folder2 you'll need to add additional index.rst files:
#Folder1/index.rst
.. toctree::
   :maxdepth: 2

   abc.m
   def.m

#Folder2/index.rst
.. toctree::
   :maxdepth: 2

   Folder2.1/index
   jkl.m
As a side note, I've setup my index pages so that they either have just group listings (maxdepth: 1) or detail page listings (maxdepth: 2) - I'm sure there is a way to make the folder/index at depth 1 and files at depth 2.
Then within Folder2.1 you'll need your third index:
#Folder2.1/index.rst
.. toctree::
   :maxdepth: 2

   ghi.m
Here is the Sphinx documentation on nested toctrees, though it wasn't as clear. Obviously you'll need the autodoc code for more complicated/deep tree structures.

Possible to extract files according to filenames listed in a text file using MATLAB?

I have a thousand files in a folder, but I only need to extract a hundred of them into a new folder, according to the filenames listed in a text file. The filenames in the text file are listed in a single column. Is it possible to do this using MATLAB? What code would I need to write? Thanks.
example:
filenames.txt is in C:\matlab
the folder containing the thousand files is named BigFiles, also in C:\matlab
the files to be extracted from the BigFiles folder are listed in a column as below:
filenames.txt
a1sndh
sd3rfe
rgd4de
sd5erw
please advise...thanks...
Enumerate all files in a folder of a specific type (if needed) using:
%main directory to process
directory = 'to_process';
%enumerate all files (.m in this case)
files = dir(fullfile(directory,'*.m'));
numfiles = length(files);
fprintf('Found %i files\n',numfiles)
Then you could load the single column using one of the many file I/O functions in Matlab.
Then just loop through all the input names, check each one against the enumerated files (files(i).name; dir returns a struct array, so use parentheses, not braces), and if it matches, move it.
EDIT:
From what I understood, you are looking for a solution along the lines:
filenames.txt
a.txt
b.txt
c.txt
.
.
.
moveMyFiles.m
%# read filenames listed in a text file
fid = fopen('C:\matlab\filenames.txt');
fList = textscan(fid, '%s');
fList = fList{1};
fclose(fid);

%# source/destination folder names
sourceDir = 'C:\matlab\BigFiles';
destDir = 'C:\matlab\out';
if ~exist(destDir,'dir')
    mkdir(destDir);
end

%# move files one by one
for i=1:numel(fList)
    movefile(fullfile(sourceDir,fList{i}), fullfile(destDir,fList{i}));
end
You can replace the MOVEFILE function by COPYFILE if you simply want to copy the files instead of moving them...
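For reference, the same list-driven move is a few lines of Python with shutil (the function and variable names are my own; swap shutil.move for shutil.copy2 to copy instead of moving):

```python
import shutil
from pathlib import Path

def move_listed_files(list_file, source_dir, dest_dir):
    """Move every file named in `list_file` (one name per line) from source to dest."""
    dest = Path(dest_dir)
    dest.mkdir(parents=True, exist_ok=True)  # like the exist/mkdir check above
    for name in Path(list_file).read_text().split():
        shutil.move(str(Path(source_dir, name)), str(dest / name))
```

A missing source file raises an error mid-loop, so files listed in filenames.txt but absent from BigFiles should be checked for first if that matters.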