I'm using the checkcode function in MATLAB to get a struct of all error messages for a supplied file, along with their McCabe complexity and the ID associated with each message, i.e.:
info = checkcode(fileName, '-cyc','-id');
In MATLAB's preferences there is a list of all possible errors, broken down into categories such as "Aesthetics and Readability", "Syntax Errors", "Discouraged Function Usage", etc.
Is there a way to access these categories using the error ID gained from the above line of code?
I tossed around different ideas in my head for this question and was finally able to come up with a mostly elegant solution for how to handle this.
The Solution
The critical component of this solution is the undocumented -allmsg flag of checkcode (or mlint). If you supply this argument, then a full list of mlint IDs, severity codes, and descriptions are printed. More importantly, the categories are also printed in this list and all mlint IDs are listed underneath their respective mlint category.
The Execution
Now we can't simply call checkcode (or mlint) with only the -allmsg flag because that would be too easy. Instead, it requires an actual file to parse and check for errors. You can pass any valid m-file, but I have opted to pass the built-in sum.m because the actual file only contains help information (its real implementation is likely C++), and mlint is therefore able to parse it very rapidly with no warnings.
checkcode('sum.m', '-allmsg');
An excerpt of the output printed to the command window is:
INTER ========== Internal Message Fragments ==========
MSHHH 7 this is used for %#ok and should never be seen!
BAIL 7 done with run due to error
INTRN ========== Serious Internal Errors and Assertions ==========
NOLHS 3 Left side of an assignment is empty.
TMMSG 3 More than 50,000 Code Analyzer messages were generated, leading to some being deleted.
MXASET 4 Expression is too complex for code analysis to complete.
LIN2L 3 A source file line is too long for Code Analyzer.
QUIT 4 Earlier syntax errors confused Code Analyzer (or a possible Code Analyzer bug).
FILER ========== File Errors ==========
NOSPC 4 File <FILE> is too large or complex to analyze.
MBIG 4 File <FILE> is too big for Code Analyzer to handle.
NOFIL 4 File <FILE> cannot be opened for reading.
MDOTM 4 Filename <FILE> must be a valid MATLAB code file.
BDFIL 4 Filename <FILE> is not formed from a valid MATLAB identifier.
RDERR 4 Unable to read file <FILE>.
MCDIR 2 Class name <name> and @directory name do not agree: <FILE>.
MCFIL 2 Class name <name> and file name do not agree: <file>.
CFERR 1 Cannot open or read the Code Analyzer settings from file <FILE>. Using default settings instead.
...
MCLL 1 MCC does not allow C++ files to be read directly using LOADLIBRARY.
MCWBF 1 MCC requires that the first argument of WEBFIGURE not come from FIGURE(n).
MCWFL 1 MCC requires that the first argument of WEBFIGURE not come from FIGURE(n) (line <line #>).
NITS ========== Aesthetics and Readability ==========
DSPS 1 DISP(SPRINTF(...)) can usually be replaced by FPRINTF(...).
SEPEX 0 For better readability, use newline, semicolon, or comma before this statement.
NBRAK 0 Use of brackets [] is unnecessary. Use parentheses to group, if needed.
...
The first column is clearly the mlint ID, the second column is actually a severity number (0 = mostly harmless, 1 = warning, 2 = error, 4-7 = more serious internal issues), and the third column is the message that is displayed.
As you can see, all categories also have an identifier but no severity, and their message format is ===== Category Name =====.
So now we can just parse this information and create some data structure that allows us to easily look up the severity and category for a given mlint ID.
Again, though, it can't always be so easy. Unfortunately, checkcode (or mlint) simply prints this information out to the command window and doesn't assign it to any of our output variables. Because of this, it is necessary to use evalc (shudder) to capture the output and store it as a string. We can then easily parse this string to get the category and severity associated with each mlint ID.
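In isolation, that capture step might look something like the following sketch (the full parser below does the real work):
output = evalc('checkcode(''sum.m'', ''-allmsg'')');  % grab the printed catalog as one string
lines  = regexp(output, '\n', 'split');               % one cell entry per catalog line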
An Example Parser
I have put all of the pieces I discussed previously together into a little function which will generate a struct where all of the fields are the mlint IDs. Within each field you will receive the following information:
warnings = mlintCatalog();
warnings.DWVRD
id: 'DWVRD'
severity: 2
message: 'WAVREAD has been removed. Use AUDIOREAD instead.'
category: 'Discouraged Function Usage'
category_id: 17
And here's the little function if you're interested.
function [warnings, categories] = mlintCatalog()
% Get a list of all categories, mlint IDs, and severity rankings
output = evalc('checkcode sum.m -allmsg');
% Break each line into its components
lines = regexp(output, '\n', 'split').';
pattern = '^\s*(?<id>[^\s]*)\s*(?<severity>\d*)\s*(?<message>.*?\s*$)';
warnings = regexp(lines, pattern, 'names');
warnings = cat(1, warnings{:});
% Determine which ones are category names
isCategory = cellfun(@isempty, {warnings.severity});
categories = warnings(isCategory);
% Fix up the category names
pattern = '(^\s*=*\s*|\s*=*\s*$)';
messages = {categories.message};
categoryNames = cellfun(@(x)regexprep(x, pattern, ''), messages, 'uni', 0);
[categories.message] = categoryNames{:};
% Now pair each mlint ID with its category
comp = bsxfun(@gt, 1:numel(warnings), find(isCategory).');
[category_id, ~] = find(diff(comp, [], 1) == -1);
category_id(end+1:numel(warnings)) = numel(categories);
% Assign a category field to each mlint ID
[warnings.category] = categoryNames{category_id};
category_id = num2cell(category_id);
[warnings.category_id] = category_id{:};
% Remove the categories from the warnings list
warnings = warnings(~isCategory);
% Convert warning severity to a number
severity = num2cell(str2double({warnings.severity}));
[warnings.severity] = severity{:};
% Save just the categories
categories = rmfield(categories, 'severity');
% Convert array of structs to a struct where the MLINT ID is the field
warnings = orderfields(cell2struct(num2cell(warnings), {warnings.id}));
end
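As a quick usage sketch tying this back to the original question (fileName is whatever file you are checking):
info = checkcode(fileName, '-cyc', '-id');   % messages for the file in question
catalog = mlintCatalog();                    % lookup table built by the function above
for k = 1:numel(info)
    if isfield(catalog, info(k).id)
        fprintf('%s [%s]: %s\n', info(k).id, catalog.(info(k).id).category, info(k).message);
    end
end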
Summary
This is a completely undocumented but fairly robust way of getting the category and severity associated with a given mlint ID. This functionality existed in 2010 and maybe even before that, so it should work with any version of MATLAB that you have to deal with. This approach is also a lot more flexible than simply hard-coding which category a given mlint ID belongs to, because the categories (and severities) change from release to release as new functions are added and old functions are deprecated.
Thanks for asking this challenging question, and I hope that this answer provides a little help and insight!
Just to close this issue off: I've managed to extract the data from a few different places and piece it together. I now have an Excel spreadsheet of all of MATLAB's warnings and errors, with columns for their corresponding ID codes, category, and severity (i.e., whether each is a warning or an error). I can now read this file in, look up the ID codes I get from the checkcode function, and pull out any information required. This can now be used to create analysis tools that look at the quality of written scripts/classes, etc.
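For example, a lookup against such a spreadsheet might look roughly like this (the file name and column names are assumptions, not the spreadsheet's actual layout):
catalog = readtable('mlint_catalog.xlsx');        % hypothetical spreadsheet name
info = checkcode('myScript.m', '-id');            % 'myScript.m' is a placeholder
[found, row] = ismember({info.id}, catalog.ID);   % assumes an 'ID' column
disp(catalog(row(found), :))                      % category/severity for each reported ID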
If anyone would like a copy of this file then drop me a message and I'll be happy to provide it.
Darren.
With code like:
fopen("DD:LOGLIBY(L1234567)", "w");
and JCL like:
//LOGTEST EXEC PGM=LOGTEST
//LOGLIBY DD DSN=MYUSER.LOG.LIBY,DISP=SHR
I can create PDS(E) members while at the same time browsing the PDS(E) to look at existing members, as expected with DISP=SHR.
If instead I code:
fopen("//'MYUSER.LOG.LIBY(L1234567)'", "w");
The fopen fails if I am browsing the PDS(E) at the time, or the browse of the PDS(E) fails while I have the file open. In other words, there is no DISP=SHR. According to the fopen() documentation, DISP=SHR is the default when using file modes of "r" etc., but not "w".
How can I provide DISP=SHR in the second example?
There are two possibilities...
For anyone not familiar with the internal structure of partitioned datasets (that is, PDS or PDS/E), these datasets are logically divided into two parts: a "directory" having pointers to all the individual members, and a "data" area containing the actual records for the individual members:
PDS: <DIRECTORY BLOCKS>
<MEMBER1>: ADDRESS OF DATA FOR MEMBER1 (xxx)
<MEMBER2>: ADDRESS OF DATA FOR MEMBER2 (yyy)
...
<DIRECTORY FREESPACE>
...
<EOF - END OF THE PDS DIRECTORY>
<DATA PORTION>
+xxx = DATA FOR MEMBER1
...
<EOF - END OF MEMBER1>
+yyy = DATA FOR MEMBER2
...
<EOF - END OF MEMBER2>
...
FREE SPACE (ALLOCATED, BUT UNUSED)
...
END OF PDS
Throughout the next few paragraphs, keep in mind that you have two options: you can open the entire PDS/PDSE, which lets you read/write whatever members you like, or you can allocate and open a single member, which then gets processed like any other sequential file.
First, if you actually have a DD statement coded as you show in the question, then you may simply need to change your open from fopen(dsn,...) to fopen("DD:ddname",...). If you're running under the UNIX shell, or you do something that results in your process running in a different address space (such as fork()), then this might not work, but it may be worth a try. If you do this with the JCL you show, the challenge would be managing the PDS/E directory - you'd need to issue your own STOW when you create or update a member, since the JCL allocates the entire dataset, not just a single member. The sequence would be:
Open the DD for output.
Write your data.
Update the PDS or PDS/E directory with new member information (this is where the STOW function comes in - it updates the directory of the PDS/PDSE to reflect the member you created or updated).
Close the file.
If you also need to read members, you'd need to issue FIND (or BLDL/POINT - which can be fseek() in C) to point to the correct member, then read the member. I'm sure it sounds like a hassle, but the advantage of this approach is that you can allocate/open the file once, and process as many individual members as you like.
A second workaround might be to dynamically allocate the file yourself, and then open it using DD:ddname syntax...if you only infrequently access the file, this is probably easier to code. The gory details of dynamic allocation are fully described here: https://www.ibm.com/support/knowledgecenter/SSLTBW_2.4.0/com.ibm.zos.v2r4.ieaa800/reqsvc.htm.
There are several ways to invoke dynamic allocation: you can write a small assembler program, you can use the z/OS UNIX Services BPXWDYN callable service, or you can use the C runtime "dynalloc()" or "svc99()" functions. The dynalloc() function is easy to use, but it only exposes a subset of what dynamic allocation can do...svc99() is more cumbersome to use, but it exposes more functionality.
However you do it, dynamic allocation takes "text units" that roughly correspond to the parameters you find on JCL DD statements. What you're describing sounds like you'd just need to pass DSN and DISP text units, and maybe DDNAME (you can either pass your own DDNAME, or let the system generate one for you).
The C runtime functions make all this easy, but be aware that there are a few oddities, such as the need to pad the parameters to their maximum length. For example, a DSN needs to be 44 characters and padded on the right with blanks - not a C-style null-terminated string.
Here's a small code snippet as an example:
#include <dynit.h>
. . .
int allocate(char *ddn, char *dsn, char *mem)
{
    __dyn_t ip;                    // Parameters to dynalloc()
    . . .
    // Prepare the parameters to dynalloc()
    dyninit(&ip);                  // Initialize the parameters
    ip.__ddname = ddn;             // 8-char blank-padded
    ip.__dsname = dsn;             // 44-char blank-padded
    ip.__status = __DISP_SHR;      // DISP=(SHR)
    ip.__normdisp = __DISP_KEEP;   // DISP=(...,KEEP)
    ip.__misc_flags = __CLOSE;     // FREE=CLOSE
    if (*mem)                      // Optional PDS, PDS/E member
        ip.__member = mem;         // 8-char blank-padded
    // Now we can call dynalloc()...
    if (dynalloc(&ip))             // 0: Success, else error
    {
        // On error, the errcode/infocode explain why - values
        // are detailed in z/OS Authorized Services Reference
        printf("SVC99: Can't allocate %s - RC 0x%x, Info 0x%x\n",
               dsn, ip.__errcode, ip.__infocode);
        return FALSE;
    }
    // If dynalloc works, you can open the file with fopen("DD:ddname",...)
    return TRUE;
}
Don't forget that when you're done with the file, you generally need to deallocate it.
The code snippet above uses "FREE=CLOSE" - this means that when the file is closed, z/OS will automatically free the allocation...if you only open and process the dataset once, this is a convenient approach. If you need to repeatedly open and close the file, then you wouldn't use FREE=CLOSE, but instead call dynamic allocation a second time after you're done with your processing and want to free the file.
If you need to concurrently access multiple files, beware that you'll need to generate multiple unique DDNAMEs. You can either do this in your own code, or you can use the form of dynamic allocation that automatically builds and returns a usable DDNAME (of the form "SYSnnnnn").
Also, don't forget that updating a dataset under DISP=SHR can be dangerous in some situations, especially if the dataset involved can be a conventional PDS as well as a PDS/E. The big danger is that two applications open the dataset for output concurrently...both will write data into the same place, and the result will likely be a damaged PDS directory.
There are some other oddities in the UNIX Services environment, particularly if you use fork() or exec() and expect file handles to work in subprocesses, since allocations are generally tied to a particular z/OS address space. Services like spawn() can let the child process run in the same address space, so this is one possibility.
I'm working with .mat files which are saved at the end of a program. The command is save foo.mat so everything is saved. I'm hoping to determine if the program changes by inspecting the .mat files. I see that from run to run, most of the .mat file is the same, but the field labeled __function_workspace__ changes somewhat.
(I am inspecting the .mat files via scipy.io.loadmat -- just loading the files and printing them out as plain text and then comparing the text. I found that save -ascii in Matlab doesn't put string labels on things, so going through Python is roundabout, but I get labels and that's useful.)
I am trying to determine from where these changes originate. Can anyone explain what __function_workspace__ contains? Why would it not be the same from one run of a given program to the next?
The variables I am really interested in are the same, but I worry that I might be overlooking some changes that might come back to bite me. Thanks in advance for any light you can shed on this problem.
EDIT: As I mentioned in a comment, the value of __function_workspace__ is an array of integers. I looked at the elements of the array and it appears that these numbers are ASCII or non-ASCII character codes. I see runs of characters which look like names of variables or functions, so that makes sense. But there are also some characters (non-ASCII) which don't seem to be part of a name, and there are a lot of null (zero) characters too. So aside from seeing names of things in __function_workspace__, I'm not sure what that stuff is exactly.
SECOND EDIT: I found that after commenting out calls to plotting functions, the content of __function_workspace__ is the same from one run of the program to the next, so that's great. At this point the only difference from one run to the next is that there is a __header__ field which contains a timestamp for the time at which the .mat file was created, which changes from run to run.
THIRD EDIT: I found an article, http://nbviewer.jupyter.org/gist/mbauman/9121961 "Parsing MAT files with class objects in them", about reverse-engineering the __function_workspace__ field. Thanks to Matt Bauman for this very enlightening article, and thanks to @mpaskov for the pointer. It appears that __function_workspace__ is an undocumented catch-all for various stuff, only one part of which is actually a "function workspace".
1) Diffing .mat files
You may want to take a look at DiffPlug. It can do diffs of MAT files and I believe there is a command line interface for it as well.
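If you prefer to stay inside MATLAB, a rough comparison sketch (the file names are placeholders) could be:
a = load('run1.mat');
b = load('run2.mat');
common = intersect(fieldnames(a), fieldnames(b));
for k = 1:numel(common)
    if ~isequal(a.(common{k}), b.(common{k}))
        fprintf('Variable %s differs between runs\n', common{k});
    end
end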
2) Contents of __function_workspace__
SciPy's __function_workspace__ refers to a special variable at the end of a MAT file that contains extra data needed for reference types (e.g. table, string, handle, etc.) and various other stuff that is not covered by the official documentation. The name is misleading as it really refers to the "Subsystem" (briefly mentioned in the official spec as an offset in the header).
For example, if you save a reference type, e.g., emptyString = "", the resulting .mat will contain the following two entries:
(1) The variable itself. It looks sort of like a UInt32 matrix, but is actually an Opaque MCOS Reference (MATLAB Class Object System) to a string object at some location in the subsystem.
[0] Compressed (81 bytes, position = 128)
[0] Matrix (144 bytes, position = 0)
[0] UInt32[2] = [17, 0] // Opaque
[1] Int8[11] = ['emptyString'] // Variable Name
[2] Int8[4] = ['MCOS'] // Object Type
[3] Int8[6] = ['string'] // Class Name
[4] Matrix (72 bytes, position = 72)
[0] UInt32[2] = [13, 0] // UInt32
[1] Int32[2] = [6, 1] // Dimensions
[2] Int8[0] = [''] // Variable Name (not needed)
[3] UInt32[6] = [-587202560, 2, 1, 1, 1, 1] // Data (Reference Target)
(2) A UInt8 matrix without name (SciPy renamed this to __function_workspace__) at the end of the file. Aside from the missing name it looks like a standard matrix, but the data is actually another MAT file (with a reduced header) that contains the real data.
[1] Compressed (251 bytes, position = 217)
[0] Matrix (968 bytes, position = 0)
[0] UInt32[2] = [9, 0] // UInt8
[1] Int32[2] = [1, 920] // Dimensions
[2] Int8[0] = [''] // Variable Name
[3] ... 920 bytes ... // Data (Nested MAT File)
The format of the data is unfortunately completely undocumented and somewhat of a mess. I could post the contents of the Subsystem, but it gets somewhat overwhelming even for such a simple case. It's essentially a MAT file that contains a struct that contains a special variable (MCOS FileWrapper__) that contains a cell array with various values, including one that magically encodes various Object Properties.
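For reference, a file like the one described above can be produced with just a couple of lines (the file name is arbitrary):
emptyString = "";                          % string objects are MCOS reference types
save('stringExample.mat', 'emptyString')   % default (v7) format, compressed as shown above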
Matt Bauman has done some great reverse engineering efforts (Parsing MAT files with class objects in them) that I believe all supporting implementations are based on. The MFL Java library contains a full (read-only) implementation of this (see McosFileWrapper.java).
Some updates on Matt Bauman's post that we found are:
The MCOS reference can refer to an array of handle objects and may have more than 6 values. It contains sizing information followed by an array of indices (see McosReference.java).
The Object Id field looks like a unique id, but the order seems random and sometimes doesn't match. I don't know what this value is, but completely ignoring it seems to work well :)
I've seen Segment 5 populated in .fig files, but I haven't been able to narrow down what's in there yet.
Edit: FYI, once the string object is correctly parsed and all properties are filled in, the actual string value is encoded in yet another undocumented format (see testDoubleQuoteString).
I have a variable cellUsers. When I try to create a field whose name is '0691008752', I get an "Invalid field name" error:
cellUsers.('0691008752') = ....
I know the reason for this error is that the name is a number, but I do not know how I can set this name for the field.
I agree with the comments above; prepending the field name with a letter is the best solution for this problem.
One way to make this consistent is to use:
fname = matlab.lang.makeValidName('0691008752')
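The result ('x0691008752' in this case) can then be used as a dynamic field name; a small sketch, with someValue standing in for whatever you want to store:
cellUsers.(fname) = someValue;   % fname is now a valid identifier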
It's not widely known, but you can have fields which begin with a number - it's bad practice and will almost certainly lead to bugs...
So how do you do it? First, you need to use MEX: take the MathWorks MEX example and modify the appropriate line:
memcpy(fieldnames[0],"Doublestuff",sizeof("Doublestuff"));
to:
memcpy(fieldnames[0],"01234",sizeof("01234"));
After compiling and running, you get a struct with a field named '01234'.
Note: you can only access it through dynamic field names. To update the field you must use MEX.
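For example, reading such a field would look like this (assuming s is the struct returned by the MEX file above):
value = s.('01234');   % dynamic field name syntax; plain s.01234 would be a syntax error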
MATLAB's methodsview tool is handy when exploring the API provided by external classes (Java, COM, etc.). Below is an example of how this function works:
myApp = actxserver('Excel.Application');
methodsview(myApp)
I want to keep the information in this window for future reference, by exporting it to a table, a cell array of strings, a .csv or another similar format, preferably without using external tools.
Some things I tried:
This window allows selecting one line at a time and doing "Ctrl+c Ctrl+v" on it, which results in a tab-separated text that looks like this:
Variant GetCustomListContents (handle, int32)
Such a strategy can work when there are only a few methods, but it is not viable for the (usually encountered) long lists.
I could not find a way to access the table data via the figure handle (w/o using external tools like findjobj or uiinspect), as findall(0,'Type','Figure') does not "see" the methodsview window/figure at all.
My MATLAB version is R2015a.
Fortunately, the methodsview.m file is accessible and makes it possible to get some insight into how the function works. Inside is the following comment:
%// Internal use only: option is optional and if present and equal to
%// 'noUI' this function returns methods information without displaying
%// the table.
After some trial and error, I saw that the following works:
[titles,data] = methodsview(myApp,'noui');
... and returns two arrays of type java.lang.String[][].
From there I found a couple of ways to present the data in a meaningful way:
Table:
dataTable = cell2table(cell(data));
dataTable.Properties.VariableNames = matlab.lang.makeValidName(cell(titles));
Cell array:
dataCell = [cell(titles).'; cell(data)];
Important note: In the table case, the "Return Type" column title gets renamed to ReturnType, since table titles have to be valid MATLAB identifiers, as mentioned in the docs.
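Since the question mentions exporting to a .csv, a minimal follow-up sketch (the file name is arbitrary):
writetable(dataTable, 'ExcelApplicationMethods.csv')   % dataTable from the table example above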
I am trying to create a function call graph for around 500 MATLAB source files. I am unable to find any tools that could help me do this for multiple source files.
Is anyone familiar with any tools or plugins?
In case any such tools are not available, any suggestions on reading 6000 lines of MATLAB code without documentation are welcome.
Let me suggest M2HTML, a tool to automatically generate HTML documentation of your MATLAB m-files. Among its feature list:
Finds dependencies between functions and generates a dependency graph (using the dot tool of GraphViz)
Automatic cross-referencing of functions and subfunctions with their definition in the source code
Check out this demo page to see an example of the output of this tool.
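A minimal invocation might look like this (the folder names are placeholders, and GraphViz must be installed for the dependency graph):
m2html('mfiles','src', 'htmldir','doc', 'graph','on', 'recursive','on')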
I recommend looking into using the depfun function to construct a call graph. See http://www.mathworks.com/help/techdoc/ref/depfun.html for more information.
In particular, I've found that calling depfun with the '-toponly' argument, then iterating over the results, is an excellent way to construct a call graph by hand. Unfortunately, I no longer have access to any of the code that I've written using this.
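A rough sketch of that idea (the function name is a placeholder; note that depfun has been deprecated in newer MATLAB releases):
list = depfun('myTopFunction', '-toponly', '-quiet');   % the file itself plus its direct dependencies
for k = 1:numel(list)
    fprintf('%s\n', list{k});
end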
I take it you mean you want to see exactly how your code is running - what functions call what subfunctions, when, and how long those run for?
Take a look at the MATLAB Code Profiler. Execute your code as follows:
>> profile on -history; MyCode; profile viewer
>> p = profile('info');
p contains the function history. From that same help page I linked above:
The history data describes the sequence of functions entered and exited during execution. The profile command returns history data in the FunctionHistory field of the structure it returns. The history data is a 2-by-n array. The first row contains Boolean values, where 0 means entrance into a function and 1 means exit from a function. The second row identifies the function being entered or exited by its index in the FunctionTable field. This example [below] reads the history data and displays it in the MATLAB Command Window.
profile on -history
plot(magic(4));
p = profile('info');
for n = 1:size(p.FunctionHistory,2)
    if p.FunctionHistory(1,n)==0
        str = 'entering function: ';
    else
        str = 'exiting function: ';
    end
    disp([str p.FunctionTable(p.FunctionHistory(2,n)).FunctionName])
end
You don't necessarily need to display the entrance and exit calls like the above example; just looking at p.FunctionTable and p.FunctionHistory will suffice to show when code enters and exits functions.
There are already a lot of answers to this question.
However, because I liked the question, and I love to procrastinate, here is my take on answering it (it is close to the approach presented by Dang Khoa, but different enough to be posted, in my opinion):
The idea is to run the profile function, along with a digraph to represent the data.
profile on
Main % Code to be analized
p = profile('info');
Now p is a structure. In particular, it contains the field FunctionTable, which is a structure array, where each structure contains information about one of the calls during the execution of Main.m. To keep only the functions, we have to check, for each element in FunctionTable, whether it is a function, i.e. whether p.FunctionTable(ii).Type is 'M-function'.
In order to represent the information, let's use a MATLAB's digraph object:
N = numel(p.FunctionTable);
G = digraph;
G = addnode(G,N);
nlabels = {};
for ii = 1:N
    Children = p.FunctionTable(ii).Children;
    if ~isempty(Children)
        for jj = 1:numel(Children)
            G = addedge(G,ii,Children(jj).Index);
        end
    end
end
Count = 1;
for ii = 1:N
    if ~strcmp(p.FunctionTable(ii).Type,'M-function') % Keep only the functions
        G = rmnode(G,Count);
    else
        Nchars = min(length(p.FunctionTable(ii).FunctionName),10);
        nlabels{Count} = p.FunctionTable(ii).FunctionName(1:Nchars);
        Count = Count + 1;
    end
end
plot(G,'NodeLabel',nlabels,'layout','layered')
G is a directed graph, where node #i refers to the i-th element in the structure array p.FunctionTable where an edge connects node #i to node #j if the function represented by node #i is a parent to the one represented by node #j.
The plot is pretty ugly when applied to my big program, but it might be nicer for smaller functions:
Zooming in on a subpart of the graph:
I agree with the m2html answer; I just wanted to add that the example from the m2html/mdot documentation is good:
mdot('m2html.mat','m2html.dot');
!dot -Tps m2html.dot -o m2html.ps
!neato -Tps m2html.dot -o m2html.ps
But I had better luck with exporting to pdf:
mdot('m2html.mat','m2html.dot');
!dot -Tpdf m2html.dot -o m2html.pdf
Also, before you try the above commands you must issue something like the following:
m2html('mfiles','..\some\dir\with\code\','htmldir','doc_dir','graph','on')
I found m2html very helpful (in combination with the Graphviz software). However, in my case I wanted to create documentation for a program contained in a folder while ignoring some subfolders and .m files. I found that, by adding the "ignoreddir" flag to the m2html call, one can make the program ignore some subfolders. However, I didn't find an analogous flag for ignoring .m files (nor does the "ignoreddir" flag do that job). As a workaround, adding the following line after line 1306 in the m2html.m file allows the "ignoreddir" flag to be used for ignoring .m files as well:
d = {d{~ismember(d,{ignoredDir{:}})}};
So, for instance, for generating html documentation of a program included in folder "program_folder" but ignoring "subfolder_1" subfolder and "test.m" file, one should execute something like this:
m2html( 'mfiles', 'program_folder', ... % set program folder
'save', 'on', ... % provide the m2html.mat
'htmldir', './doc', ... % set doc folder
'graph', 'on', ... % produce the graph.dot file to be used for the visualization, for example, as a flux/block diagram
'recursive', 'on', ... % consider also all the subfolders inside the program folders
'global', 'on', ... % link also calls between functions in different folders, i.e., do not link only the calls for the functions which are in the same folder
'ignoreddir', { 'subfolder_1' 'test.m' } ); % ignore the following folders/files
Please note that all subfolders with name "subfolder_1" and all files with name "test.m" inside the "program_folder" will be ignored.