Matlab create new Microsoft Access Database file *.accdb - matlab

I have used the following code pattern to access my *.accdb files:
accdb_path='C:\path\to\accdb\file\wbe3.accdb';
accdb_url= [ 'jdbc:odbc:Driver={Microsoft Access Driver (*.mdb, *.accdb)};DSN='''';DBQ=' accdb_path ];
conn = database('','','','sun.jdbc.odbc.JdbcOdbcDriver',accdb_url);
If instead I want to create a new *.accdb file, how would I do that? There is much on the web about how to connect, but I haven't found how to create the *.accdb file itself.
In case it matters, I want to be able to execute SQL 92 syntax. I am using Matlab 2015b. I do not want to use the Matlab GUI for exploring databases.

Actually, what you are attempting to do can be very tricky to achieve. It may require a direct interface to Access through an ActiveX control and I'm not even sure it can be done. It seems that the web is lacking a solid information pool concerning Access interoperability.
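For what it's worth, a minimal sketch of that ActiveX route might look like the following (untested here; it assumes Microsoft Access is installed locally so the Access.Application COM server is registered, and the path is a placeholder):
% Sketch only: create an empty .accdb through the Access COM server
app = actxserver('Access.Application');
app.NewCurrentDatabase('C:\path\to\new.accdb'); % creates the file on disk
app.CloseCurrentDatabase();
app.Quit();
delete(app); % release the COM server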
One quick workaround I can suggest, although inelegant, is to manually create an empty ACCDB file to use as a template, and then duplicate it whenever a new database must be created:
conn = CreateDB('C:\PathB\wbe3.accdb');

function accdb_conn = CreateDB(accdb_path)
    status = copyfile('C:\PathA\template.accdb',accdb_path,'f');
    if (status)
        accdb_url = ['jdbc:odbc:Driver={Microsoft Access Driver (*.mdb, *.accdb)};DSN='''';DBQ=' accdb_path];
        accdb_conn = database('','','','sun.jdbc.odbc.JdbcOdbcDriver',accdb_url);
    else
        accdb_conn = [];
        error(['Could not duplicate the ACCDB template to the directory "' accdb_path '".']);
    end
end

The following example is based on Tommaso's answer, which provides code for copying an empty *.accdb file and connecting to the copy. Based on an afternoon of trial, error, and perusing the web/help, I've expanded on that to create a database table and export a Matlab table to it. I've also embedded comments showing where modifications are needed, presumably due to my older 2015b version of Matlab, error-catching constructs, and caveats in the file copy.
srcPath = [pwd '/emptyFile.accdb']; % Source
tgtPath = [pwd '/new.accdb'];       % Target
cpyStatOk = copyfile( srcPath, tgtPath );
% Note: copyfile overwrites an existing target file without warning
if cpyStatOk
    accdb_url = [ ...
        'jdbc:odbc:Driver={Microsoft Access Driver (*.mdb, *.accdb)};DSN='''';DBQ=' ...
        tgtPath ];
    conn = database('','','','sun.jdbc.odbc.JdbcOdbcDriver',accdb_url);
else
    error('Couldn''t copy %s to %s',srcPath,tgtPath);
end % if cpyStatOk
try
    % conn.Execute(['CREATE TABLE tstMLtbl2accdb ' ... Not for 2015b
    curs = conn.exec(['CREATE TABLE tstMLtbl2accdb ' ...
        '( NumCol INTEGER, StrCol VARCHAR(255) );']);
    if ~isempty( curs.Message )
        % fprintf(2,'%s: %s\n',mfilename,curs.Message);
        error('%s: %s\n',mfilename,curs.Message);
        % Trigger `catch` & close(conn)
    end %if
    % sqlwrite( conn, 'tstMLtbl2accdb', ... Not supported in 2015b
    datainsert( conn, 'tstMLtbl2accdb', {'NumCol','StrCol'}, ...
        table( floor(10*rand(5,1)), {'abba';'cadabra';'dog';'cat';'mouse'}, ...
        'VariableNames',{'NumCol','StrCol'} ) );
catch xcptn
    close(conn)
    fprintf(2,'Done `catch xcptn`\n');
    rethrow(xcptn);
end % try
%
% Other database manipulations here
%
close(conn)
disp(['Done ' mfilename]);
This has been immensely educational for me, and I hope it is useful for others considering SQL as an alternative to the more code-heavy relational manipulations native to Matlab. With this amount of overhead, I'd have to say that it is not attractive to perform SQL manipulations on data residing in the Matlab workspace, except where one really needs the heavily optimized query engines of relational databases.
To those savvy with interfacing to Access: your comments on the purpose of the field-names argument of the datainsert function would be appreciated. It is dubbed colnames in the documentation. From testing, the field names and number of columns must match between the existing target table in Access and the source table in Matlab, so the field-names argument doesn't seem to serve any purpose. The help documentation isn't all that helpful either.
AFTERNOTE: I've composed a "specification" for the colnames argument based on examples from TMW. TMW has confirmed this explanation:
The colnames argument tells the external database environment the names and order of the fields in the data container supplied via the data argument. These field names are used to match the fields of the transferred data with fields in the table tablename residing within the external database environment. Because of this explicit name matching, the order of the fields in data does not have to match the order of the fields in tablename.
If I find any departures of empirical behaviour from the above "specification", I will update this answer.
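For illustration, here is a hedged sketch of that matching behavior, reusing the tstMLtbl2accdb table created above (the source table deliberately lists its variables in the opposite order; colnames lets datainsert match fields by name rather than by position):
srcTbl = table( {'emu';'gnu'}, [7;8], 'VariableNames',{'StrCol','NumCol'} ); % reversed column order
datainsert( conn, 'tstMLtbl2accdb', {'StrCol','NumCol'}, srcTbl ); % fields matched by name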

Related

Exporting the output of MATLAB's methodsview

MATLAB's methodsview tool is handy when exploring the API provided by external classes (Java, COM, etc.). Below is an example of how this function works:
myApp = actxserver('Excel.Application');
methodsview(myApp)
I want to keep the information in this window for future reference, by exporting it to a table, a cell array of strings, a .csv or another similar format, preferably without using external tools.
Some things I tried:
This window allows selecting one line at a time and doing "Ctrl+c Ctrl+v" on it, which results in a tab-separated text that looks like this:
Variant GetCustomListContents (handle, int32)
Such a strategy can work when there are only several methods, but it is not viable for the (usually encountered) long lists.
I could not find a way to access the table data via the figure handle (without using external tools like findjobj or uiinspect), as findall(0,'Type','Figure') does not "see" the methodsview window/figure at all.
My MATLAB version is R2015a.
Fortunately, the methodsview.m file is accessible and provides some insight into how the function works. Inside is the following comment:
%// Internal use only: option is optional and if present and equal to
%// 'noUI' this function returns methods information without displaying
%// the table.
After some trial and error, I saw that the following works:
[titles,data] = methodsview(myApp,'noui');
... and returns two arrays of type java.lang.String[][].
From there I found a couple of ways to present the data in a meaningful way:
Table:
dataTable = cell2table(cell(data));
dataTable.Properties.VariableNames = matlab.lang.makeValidName(cell(titles));
Cell array:
dataCell = [cell(titles).'; cell(data)];
Important note: In the table case, the "Return Type" column title gets renamed to ReturnType, since table titles have to be valid MATLAB identifiers, as mentioned in the docs.
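Since the original goal included exporting the information to a .csv file, one possible final step is a one-liner (a sketch, assuming the dataTable built above; writetable is available in R2015a):
writetable(dataTable, 'methodsview_export.csv'); % writes the column headers plus one row per method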

Assigning a whole DataStructure its nullind array

Some context before the question.
Imagine a file, FileA, with around 50 fields of different types. Instead of all programs using the file directly, I tried having a service program, so that the file could only be accessed through that service program. The programs calling the service would then receive a data structure based on the file structure, as an ExtName. I use SQL to retrieve the information, so, basically, the procedure goes like this:
Datastructure shared by service program :
D FileADS E DS ExtName(FileA) Qualified
Procedure called by programs :
P getFileADS B Export
D PI N
D PI_IDKey 9B 0 Const
D PO_DS LikeDS(FileADS)
D LocalDS E DS ExtName(FileA) Qualified
D NullInd S 5i 0 Array(50) <-- Since 50 fields in fileA
//Code
Clear LocalDS;
Clear PO_DS;
exec sql
SELECT *
INTO :LocalDS :nullind
FROM FileA
WHERE FileA.ID = :PI_IDKey;
If SqlCod <> 0;
Return *Off;
EndIf;
PO_DS = LocalDS;
Return *On;
P getFileADS E
So, that procedure will return a datastructure filled with a record from FileA if it finds it.
Now my question : Is there any way I can assign the %nullind(field) = *On without specifying EACH 50 fields of my file?
Something like this loop:
i = 1;
DoW (i <= 50);
  If nullind(i) = -1;
    %nullind(datastructure.field) = *On;
  EndIf;
  i += 1;
EndDo;
Because let's face it, it'd be a pain to go through each field of each file every time.
I know a simple chain(n) could do the trick
chain(n) PI_IDKey FileA FileADS;
but I really was looking to do it with SQL.
Thank you for your advice!
OS Version : 7.1
First, you'll be better off in the long run by eliminating SELECT * and supplying a SELECT list of the 50 field names.
Next, consider these two web pages -- Meaningful Names for Null Indicators and Embedded SQL and null indicators. The first shows an example of assigning names to each null indicator to match the associated field names. It's just a matter of declaring a based DS with names, based on the address of your null indicator array. The second points out how a null indicator array can be larger than needed, so future database changes won't affect results. (Bear in mind that the page shows a null array of 1000 elements, and the memory is actually relatively tiny even at that size. You can declare it smaller if you think it's necessary for some reason.)
You're creating a proc that you'll only write once. It's not worth saving the effort of listing the 50 fields. Maybe if you had many programs using this proc and you had to create the list each time it'd be a slight help to use SELECT *, but even then it's not a great idea.
A matching template DS for the 50 data fields can be defined in the /COPY member that will hold the proc prototype. The template DS will be available in any program that brings the proc prototype in. Any program that needs to call the proc can simply specify LIKEDS referencing the template to define its version in memory. The template DS should probably include the QUALIFIED keyword, and programs would then use their own DS names as the qualifying prefix. The null indicator array can be handled similarly.
However, it's not completely clear what your actual question is. You show an example loop and ask if it'll work, but you don't say if you had a problem with it. It's an array, so a loop can be used much like you show. But it depends on what you're actually trying to accomplish with it.
For old school RPG, just include the null indicators in the data structure populated by the SELECT statement:
select col1, ifnull(col1), col2, ifnull(col2), etc. into :dsfilewithnull from FileA f where f.id = :id;
For old school RPG that can't handle nulls, remove them in the SELECT statement:
select coalesce(col1, 0), coalesce(col2, ' '), coalesce(col3, :lowdate) into :dsfile from FileA f where f.id = :id;
The second method would be easier to use in a legacy environment.
Pass the key by value to the procedure so you can use it like a built-in function.
One answer to your question would be to make the array part of a data structure, and assign *all'0' to the data structure.
dcl-ds nullIndDs;
  nullInd Ind Dim(50);
end-ds;

nullIndDs = *all'0';
The answer by jmarkmurphy is an example of assigning all zeros to an array of indicators. For the example that you show in your question, you can do it this way:
D NullInd S 5i 0 dim(50)
/free
NullInd(*) = 1 ;
Nullind(*) = 0 ;
*inlr = *on ;
return ;
/end-free
That's a complete program that you can compile and test. Run it in debug and stop at the first statement. Display NullInd to see the initial value of its elements. Step through the first statement and display it again to see how the elements changed. Step through the next statement to see how things changed again.
As for "how to do it in SQL", that part doesn't make sense. SQL sets the values automatically when you FETCH a row. Other than that, the array is used by the host language (RPG in this case) to communicate values back to SQL. When a SQL statement runs, it again automatically uses whatever values were set. So, it either is used automatically by SQL for input or output, or is set by your host language statements. There is nothing useful that you can do 'in SQL' with that array.

Reading structured variable from MAT file

I am performing an analysis which involves the simulation of over 1000 cases, and I am extracting lots of data for each case as well (about 70 MB). Currently I am saving the results for each case as:
Vessel.TotalForce
Vessel.WindForce
Vessel.CurrentForce
Vessel.WaveForce
Vessel.ConnectionForce
...
Line1.EffectiveTension
Line1.X
Line1.Y
Line2.EffectiveTension
Line2.X
Line2.Y
...
save('CaseNo1.mat')
Now, I need to perform my analysis on CaseNo1.mat through CaseNo1000.mat. Initially I planned to create a Database.mat file by loading all cases into it and then accessing any variable using h5read; this way Matlab doesn't need to load all the data at once. However, I am now concerned that my database file will be too big.
Is there any way I can read the structured variables from an individual case file, for example CaseNo1.mat, without loading the whole file into memory?
Matlab's examples show loading individual variables directly from a MAT file without loading the whole file, but I am not sure how to read structure data the same way:
x=load('CaseNo1.mat','Line1.X')
says Line1.X is not found, but it is there; the command is just not the correct way to access the data. I also tried using h5read, but it says CaseNo1.mat is not an HDF5 file.
Can anyone help with this?
Apart from that, I would also appreciate any suggestions on performing such data-intensive analysis.
I was wrong! I'm leaving my old answer for context, though I've edited it to reference this one. I thought I had used matfile() in that way before, but I hadn't. I just did a thorough search and ran a few test cases. You've actually run into a limitation of the way Matlab handles and references structures stored in .mat files. There is, however, a solution. It does involve some refactoring of your original code, but it shouldn't be too egregious: instead of nested structures, save each field as a separate variable with an underscore-separated name:
Vessel_TotalForce
Vessel_WindForce
Vessel_CurrentForce
Vessel_WaveForce
Vessel_ConnectionForce
...
Line1_EffectiveTension
Line1_X
Line1_Y
Line2_EffectiveTension
Line2_X
Line2_Y
...
save('CaseNo1.mat')
Then to access, just use matfile (or load) as you were before. Like so:
S = load('CaseNo1.mat', 'Vessel_WaveForce'); % returns a struct with field Vessel_WaveForce
Vessel_WaveForce = S.Vessel_WaveForce;
It's important to note that this restriction doesn't appear to be caused by anything you've chosen to do in your program; rather, it is imposed by the way Matlab interacts with its native storage files when they contain structures.
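Once the variables are flattened like this and the file is saved with the '-v7.3' flag, matfile can even read them partially, without loading the rest of the file. A minimal sketch (assuming Line1_X is a numeric row vector):
m = matfile('CaseNo1.mat'); % just a handle; no data loaded yet
firstTen = m.Line1_X(1, 1:10); % reads only these elements from disk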
EDIT: This answer works, but doesn't actually solve the problem posed in OP's question. I thought I had used matfile to generate a handle that I could access, but I was wrong. See my other answer for details.
You could use matfile, like so:
myMatFileHandle = matfile('caseNo1.mat');
thisVessel = myMatFileHandle.Vessel; % field names are case-sensitive
Also, from the little bit I can see, you seem to be on the right track for high-volume analysis. Just remember to use sparse when applicable, and generally avoid conditionals inside loops if possible.
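As a tiny illustration of the sparse remark (a sketch):
A = sparse(10000, 10000); % stores almost nothing until elements are set
A(1,1) = 5;
whos A % a full 10000-by-10000 double would need about 800 MB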
Good luck!
The objective of storing data in structured format is:
to be organized, and
to make scripting a post-processor easy when looping through the data under one data set is required.
The goal was to store a structured data set containing integer, floating-point and string variables in a MAT file, and to be able to read just the required variable using the h5read command. Matlab's load command cannot read a variable beyond the first level of the data stored in a MAT file, and h5write cannot write string variables, hence a workaround was needed.
To do this, I used the following method:
filename = 'myMatFile';
Vessel.TotalForce = %store some data
Vessel.WindForce = %store some data
Vessel.CurrentForce = %store some data
Vessel.WaveForce = %store some data
Vessel.ConnectionForce = %store some data
...
Line1.LineType = 'Wire'
Line1.ArcLength_0.EffectiveTension = %store some data
Line1.ArcLength_50.EffectiveTension= %store some data
Line1.ArcLength_100.EffectiveTension= %store some data
Line2.LineType = 'Chain'
Line2.ArcLength_0.EffectiveTension= %store some data
Line2.ArcLength_50.EffectiveTension= %store some data
Line2.ArcLength_100.EffectiveTension= %store some data
save([filename '_temp.mat']);
PointToMat=matfile([filename '.mat'],'Writable',true);
PointToMat.(char(filename)) = load([filename '_temp.mat']);
delete([filename '_temp.mat']);
Now to read from the MAT file created, we can use h5read as usual. To extract the EffectiveTension for Line1, ArcLength_0:
EffectiveTension = h5read([filename '.mat'],['/' filename '/Line1/ArcLength_0/EffectiveTension']);
For string variables, h5read returns the decimal values corresponding to each character. To obtain the actual string, I used:
name = char(h5read([filename '.mat'],['/' filename '/Line1/LineType']));
I tried this method on my data set, which is about 200 MB, and I could process it pretty fast. Hope this helps someone someday.
Short answer:
Having saved the data into a MAT file with the '-v7.3' option, use something like h5read(filename, '/Line2/X') to read just one structure field. You can even read an array partially, for example:
s.a = 1:100;
save('test.mat', '-v7.3', 's');
clear
h5read('test.mat', '/s/a', [1 10], [1 5], [1 3])
returns every third element of the 1:100 array, starting with the 10th element and returning 5 values:
10 13 16 19 22
Long answer:
See the answer by @Amitava for more elaborate code and topic coverage.

SQLAlchemy, Psycopg2 and Postgresql COPY

It looks like Psycopg has a custom command for executing a COPY:
psycopg2 COPY using cursor.copy_from() freezes with large inputs
Is there a way to access this functionality from within SQLAlchemy?
The accepted answer is correct, but if you want more than just EoghanM's comment to go on, the following worked for me for COPYing a table out to CSV...
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker

eng = create_engine("postgresql://user:pwd@host:5432/db")
ses = sessionmaker(bind=eng)()  # not strictly needed for the COPY itself
dbcopy_f = open('/tmp/some_table_copy.csv', 'wb')
copy_sql = 'COPY some_table TO STDOUT WITH CSV HEADER'
fake_conn = eng.raw_connection()
fake_cur = fake_conn.cursor()
fake_cur.copy_expert(copy_sql, dbcopy_f)
The sessionmaker isn't necessary, but if you're in the habit of creating the engine and the session at the same time, you'll need to separate them in order to use raw_connection (unless there is some way to access the engine through the session object that I don't know of). The SQL string provided to copy_expert is also not the only option; there is a basic copy_to function that you can use with a subset of the parameters you could pass to a normal COPY TO query. Overall performance of the command seems fast to me, copying out a table of ~20000 rows.
http://initd.org/psycopg/docs/cursor.html#cursor.copy_to
http://docs.sqlalchemy.org/en/latest/core/connections.html#sqlalchemy.engine.Engine.raw_connection
If your engine is configured with a psycopg2 connection string (which is the default, so either "postgresql://..." or "postgresql+psycopg2://..."), you can create a psycopg2 cursor from an SQL Alchemy session using
cursor = session.connection().connection.cursor()
which you can use to execute
cursor.copy_from(...)
The cursor will be active in the same transaction as your session currently is. If a commit or rollback happens, any further use of the cursor will throw a psycopg2.InterfaceError; you would have to create a new one.
You can use:
import io

def to_sql(engine, df, table, if_exists='fail', sep='\t', encoding='utf8'):
    # Create the table from the DataFrame's schema (zero rows)
    df[:0].to_sql(table, engine, if_exists=if_exists)
    # Prepare the data as an in-memory CSV
    output = io.StringIO()
    df.to_csv(output, sep=sep, header=False, encoding=encoding)
    output.seek(0)
    # Insert the data with COPY
    connection = engine.raw_connection()
    cursor = connection.cursor()
    cursor.copy_from(output, table, sep=sep, null='')
    connection.commit()
    cursor.close()
I insert 200,000 rows in 5 seconds instead of 4 minutes.
It doesn't look like it.
You may have to just use psycopg2 to expose this functionality and forego the ORM capabilities. I guess I don't really see the benefit of the ORM in such an operation anyway, since it's a straight bulk insert, and dealing with individual objects a la an ORM would not really make a whole lot of sense.
If you're starting from SQLAlchemy, you need to first get to the connection engine (also known by the property name bind on some SQLAlchemy objects):
engine = create_engine('postgresql+psycopg2://myuser:password@localhost/mydb')
# or
engine = session.get_bind()
# or any other way you know to get to the engine
From the engine you can isolate a psycopg2 connection:
# get a psycopg2 connection
connection = engine.connect().connection
# get a cursor on that connection
cursor = connection.cursor()
Here are some templates for the COPY statement to use with cursor.copy_expert(), a more complete and flexible option than copy_from() or copy_to(), as indicated here: https://www.psycopg.org/docs/cursor.html#cursor.copy_expert.
# to dump to a file
dump_to = """
COPY mytable
TO STDOUT
WITH (
FORMAT CSV,
DELIMITER ',',
HEADER
);
"""
# to copy from a file:
copy_from = """
COPY mytable
FROM STDIN
WITH (
FORMAT CSV,
DELIMITER ',',
HEADER
);
"""
Check out what the options above mean, and others that may be of interest to your specific situation, at https://www.postgresql.org/docs/current/static/sql-copy.html.
IMPORTANT NOTE: The documentation of cursor.copy_expert() indicates using STDOUT to write out to a file and STDIN to copy from a file. But if you look at the syntax in the PostgreSQL manual, you'll notice that you can also specify the file to read or write directly in the COPY statement. Don't do that: with a server-side file, the file is read or written by the PostgreSQL server process rather than your client, so it must be accessible to the server and generally requires superuser rights. Just do what's indicated in psycopg2's docs and specify STDIN or STDOUT in your statement with cursor.copy_expert(); it should be fine.
# running the copy statement
with open('/path/to/your/data/file.csv') as f:
cursor.copy_expert(copy_from, file=f)
# don't forget to commit the changes.
connection.commit()
You don't need to drop down to psycopg2, use raw_connection nor a cursor.
Just execute the sql as usual, you can even use bind parameters with text():
engine.execute(
    text('''copy some_table from :csv delimiter ',' csv''').execution_options(autocommit=True),
    csv='/tmp/a.csv')
You can drop the execution_options(autocommit=True) if this PR is accepted.

Call graph generation from matlab src code

I am trying to create a function call graph for around 500 Matlab source files. I am unable to find any tools that could do this across multiple source files.
Is anyone familiar with any tools or plugins?
In case no such tools are available, any suggestions on reading 6000 lines of Matlab code without documentation are welcome.
Let me suggest M2HTML, a tool to automatically generate HTML documentation of your MATLAB m-files. Among its feature list:
Finds dependencies between functions and generates a dependency graph (using the dot tool of GraphViz)
Automatic cross-referencing of functions and subfunctions with their definition in the source code
Check out this demo page to see an example of the output of this tool.
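A minimal invocation might look like the following sketch ('src' and 'doc' are placeholder folder names):
% Document every m-file under ./src into ./doc and emit the GraphViz dependency graph
m2html('mfiles','src', 'htmldir','doc', 'graph','on');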
I recommend looking into using the depfun function to construct a call graph. See http://www.mathworks.com/help/techdoc/ref/depfun.html for more information.
In particular, I've found that calling depfun with the '-toponly' argument, then iterating over the results, is an excellent way to construct a call graph by hand. Unfortunately, I no longer have access to any of the code that I've written using this.
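From memory, the gist was something like this sketch ('myTopFunction' is a placeholder; with '-toponly', depfun returns the analyzed file followed by its direct dependencies):
% Sketch: print the direct callees of one function; recurse over them to build a graph
deps = depfun('myTopFunction', '-toponly', '-quiet');
for k = 2:numel(deps) % deps{1} is myTopFunction itself
    fprintf('myTopFunction -> %s\n', deps{k});
end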
I take it you mean you want to see exactly how your code is running: what functions call what subfunctions, when, and how long those run for?
Take a look at the MATLAB Code Profiler. Execute your code as follows:
>> profile on -history; MyCode; profile viewer
>> p = profile('info');
p contains the function history. From that same help page I linked above:
The history data describes the sequence of functions entered and exited during execution. The profile command returns history data in the FunctionHistory field of the structure it returns. The history data is a 2-by-n array. The first row contains Boolean values, where 0 means entrance into a function and 1 means exit from a function. The second row identifies the function being entered or exited by its index in the FunctionTable field. This example [below] reads the history data and displays it in the MATLAB Command Window.
profile on -history
plot(magic(4));
p = profile('info');

for n = 1:size(p.FunctionHistory,2)
    if p.FunctionHistory(1,n)==0
        str = 'entering function: ';
    else
        str = 'exiting function: ';
    end
    disp([str p.FunctionTable(p.FunctionHistory(2,n)).FunctionName])
end
You don't necessarily need to display the entrance and exit calls like the above example; just looking at p.FunctionTable and p.FunctionHistory will suffice to show when code enters and exits functions.
There are already a lot of answers to this question.
However, because I liked the question, and I love to procrastinate, here is my take at answering it (the approach is close to the one presented by Dang Khoa, but different enough to be posted, in my opinion):
The idea is to run the profile function and then build a digraph to represent the data.
profile on
Main % Code to be analized
p = profile('info');
Now p is a structure. In particular, it contains the field FunctionTable, a structure array in which each element holds information about one of the calls during the execution of Main.m. To keep only the functions, we have to check, for each element of FunctionTable, whether it is a function, i.e. whether p.FunctionTable(ii).Type is 'M-function'.
In order to represent the information, let's use a MATLAB's digraph object:
N = numel(p.FunctionTable);
G = digraph;
G = addnode(G,N);
nlabels = {};

for ii = 1:N
    Children = p.FunctionTable(ii).Children;
    if ~isempty(Children)
        for jj = 1:numel(Children)
            G = addedge(G,ii,Children(jj).Index);
        end
    end
end

Count = 1;
for ii = 1:N
    if ~strcmp(p.FunctionTable(ii).Type,'M-function') % Keep only the functions
        G = rmnode(G,Count);
    else
        Nchars = min(length(p.FunctionTable(ii).FunctionName),10);
        nlabels{Count} = p.FunctionTable(ii).FunctionName(1:Nchars);
        Count = Count + 1;
    end
end

plot(G,'NodeLabel',nlabels,'layout','layered')
G is a directed graph where node #i refers to the i-th element of the structure array p.FunctionTable, and an edge connects node #i to node #j if the function represented by node #i is a parent of the one represented by node #j.
The plot is pretty ugly when applied to my big program, but it might be nicer for smaller projects. (The original answer showed the resulting plot and a zoom on a subpart of the graph; the images are omitted here.)
I agree with the m2html answer; I just wanted to add that the following example from the m2html/mdot documentation is good:
mdot('m2html.mat','m2html.dot');
!dot -Tps m2html.dot -o m2html.ps
!neato -Tps m2html.dot -o m2html.ps
But I had better luck with exporting to pdf:
mdot('m2html.mat','m2html.dot');
!dot -Tpdf m2html.dot -o m2html.pdf
Also, before you try the above commands, you must first issue something like the following:
m2html('mfiles','..\some\dir\with\code\','htmldir','doc_dir','graph','on')
I found m2html very helpful (in combination with the Graphviz software). However, in my case I wanted to create documentation for a program contained in a folder while ignoring some subfolders and .m files. I found that, by adding the "ignoreddir" flag to the m2html call, one can make the program ignore some subfolders. However, I didn't find an analogous flag for ignoring .m files (nor does the "ignoreddir" flag do that job). As a workaround, adding the following line after line 1306 of the m2html.m file allows the "ignoreddir" flag to ignore .m files as well:
d = {d{~ismember(d,{ignoredDir{:}})}};
So, for instance, to generate HTML documentation for a program included in folder "program_folder" while ignoring the "subfolder_1" subfolder and the "test.m" file, one would execute something like this:
m2html( 'mfiles', 'program_folder', ... % set program folder
'save', 'on', ... % provide the m2html.mat
'htmldir', './doc', ... % set doc folder
'graph', 'on', ... % produce the graph.dot file to be used for the visualization, for example, as a flux/block diagram
'recursive', 'on', ... % consider also all the subfolders inside the program folders
'global', 'on', ... % link also calls between functions in different folders, i.e., do not link only the calls for the functions which are in the same folder
'ignoreddir', { 'subfolder_1' 'test.m' } ); % ignore the following folders/files
Please note that all subfolders with name "subfolder_1" and all files with name "test.m" inside the "program_folder" will be ignored.