Using variables defined in R environment within knitr

I'd like to use knitr to produce a document which includes numbers, tables, and figures defined separately in R. I have a separate .R script within which I define the variables of interest, and I have executed this script and verified the variables are in memory. Then, within a .Rmd file I have the R Markdown code, within which I attempt to display the variables I've defined in R. I keep getting error messages whenever I attempt to knit, stating something like:
Error in unique(c("AsIs", oldClass(x))) : object "lamdaQ6" not found...
Clearly the knitr process initiates a new environment which excludes already defined variables. I have rather extensive R code to define the variables I want to include in the document, which I want to keep separate from the R Markdown code (both for clarity, and because R Markdown is not a good development environment for R).
Is there some means of preserving awareness of existing variables in R memory within knitr? I have searched extensively and not found a solution, probably because I don't know the correct term.

The solution offered by Paul Roub worked -- I thought it didn't, but this turned out to be due to my own typo.
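For reference, that approach amounts to sourcing the defining script from an early chunk of the .Rmd; a minimal sketch (the script name is illustrative):

```{r, echo=FALSE}
source("DefineVariables.R")  # illustrative name: runs the whole defining script
```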
A cleaner solution for my needs, rather than sourcing the whole R file, was to use save() to save just the objects I needed to create the document. I added a line to save the objects at the end of the .R file, then used load() to load the objects at the top of the .Rmd file.
So, at the end of the .R file, to save objects XX and YY, I have:
```
save(XX, YY, file = "Data4Rmd.RData")
```
Then, near the beginning of the .Rmd file, to import these objects, I have:
```{r, echo=FALSE}
setwd("c:/CurrentProjectDirectory")
load(file = "Data4Rmd.RData")
```
This allows me to later use statements like:
Decay rate is lambda = `r XX`, and mean lifetime is `r YY`.
Yet another alternative might have been to save and load the whole workspace, but this seemed excessive.
Thanks for the assistance Paul, much appreciated :)

Stata and global variables

I am working with Stata.
I have a variable called graduate_secondary.
I generate a global macro called outcome, because eventually I will use another outcome.
Now I want to replace the variable graduate if a condition involving the global is met, but I get an error.
My code is:
```
global outcome "graduate_secondary"
gen graduate=.
replace graduate=1 if graduate_primary==1 & `outcome'==1
```
But I receive the error "==1 invalid name".
Does anyone know why?
Something along those lines might work (using a reproducible example):
```
sysuse auto, clear
global outcome "rep78"
gen graduate=.
replace graduate=1 if mpg==22 & $outcome==3
(2 real changes made)
```
In your example,
```
replace graduate=1 if graduate_primary==1 & $outcome==1
```
would work.
Another solution is to replace global outcome "graduate_secondary" with local outcome "graduate_secondary".
Stata has two types of macros: global, which are accessed with a $, and local, which are accessed with single quotes `' around the name -- as you did in your original code.
You get an error message because a local by the name of outcome has no value assigned to it in your workspace. By design, this does not itself produce an error; instead, the reference to the macro evaluates to a blank value. You can see the result of evaluating macro references by typing them with display, as follows. You can also see all of the macros in your workspace with macro dir (the locals start with an underscore):
```
display `outcome'
display $outcome
```
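A minimal self-contained illustration of the two macro types (the macro names are illustrative): locals are expanded with a backtick/quote pair, globals with a dollar sign, so the last line prints hello world.

```
local myloc "hello"
global myglob "world"
display "`myloc' $myglob"
```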
Here is a blog post about using macros in Stata. In general, I only use global macros when I have to pass something between multiple routines, but this seems like a good use case for locals.

How to automatically convert a Matlab script into a Matlab function?

My problem is the following:
I have very many (~1000) mutually calling Matlab scripts, which are very poorly written, regularly damage each other's environments, and have generally become unmanageable.
One of the reasons I even have this problem is that I need to write a test suite covering a big part of them. Luckily, for most of them the main criterion of 'correctness' is 'they don't crash'.
Just running them one by one in a loop is generally not an option, because they regularly call clear classes, close all, clc, shadow built-in functions and operators, et cetera.
So my original aim was to find a way to run a matlab script in sort of an 'isolated environment', but I didn't find a good way to do it. (Suggestions welcome, but it is not the main question.)
Since I will need to convert them all to functions anyway, I am looking for some way to do it auto-magically, or at least semi-automatically.
What I mean by semi-automatically:
1. Just add a line function varargout = $filename( varargin ) as the first line of the file, and end as the last one. This will at least make them runnable as functions with feval and the like, and (more importantly) prevent them from damaging the test runner.
2. Do point 1, then scan the file for references to undeclared variables and add them as function arguments. This should also be doable, since the names of the variables are known. It will not help identify output variables, but it would still be a lot of help; for example, we could pack the whole workspace into one big output structure.
3. Do a runtime version of point 2. This way the 'magical converter' can actually track execution environments (workspaces) and identify which variables are implicitly used as 'input arguments' of a script, and which are later used as 'output arguments'. This option looks EXPHARD, but for a small number of calls it should not be too bad in practice.
Point 1 I can implement myself using sed, and I can also get rid of all the clear classes and clc calls, but options 2 and 3 seem much harder. Is there anything at least remotely resembling options 2 or 3?
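For what it's worth, point 1 can also be sketched in MATLAB itself rather than sed. A minimal, unhardened sketch (the file name is illustrative; it assumes the script contains no function line already, and it rewrites the file in place):

```
% Wrap a script file in a function definition (point 1).
scriptFile = 'myscript.m';             % illustrative file name
[~, name] = fileparts(scriptFile);
body = fileread(scriptFile);           % read the whole script
fid = fopen(scriptFile, 'w');          % rewrite it in place
fprintf(fid, 'function varargout = %s(varargin)\n', name);
fprintf(fid, '%s\nend\n', body);
fclose(fid);
```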

Can the MATLAB editor show the file from which text is displayed?

In MATLAB, how do you tell where in the code a variable is getting output?
I have about 10K lines of MATLAB code with about 4 people working on it. Somewhere, someone has dumped a variable in a MATLAB script in the typical way:
```
foo
```
Unfortunately, I do not know what variable is getting output. And the output is cluttering out other more important outputs.
Any ideas?
p.s. Has anyone ever tried overwriting System.out? Since MATLAB and Java integration is so tight, would that work? A trick I've used in Java when faced with this problem is to replace System.out with my own version.
Ooh, I hate this too. I wish Matlab had a "dbstop if display" to stop on exactly this.
The mlint traversal from weiyin is a good idea. Mlint can't see dynamic code, though, such as arguments to eval() or string-valued figure handle callbacks. I've run into output like this from callbacks, where update_table() returns something under some conditions:
```
uicontrol('Style','pushbutton', 'Callback','update_table')
```
You can "duck-punch" a method in to built-in types to give you a hook for dbstop. In a directory on your Matlab path, create a new directory named "#double", and make a #double/display.m file like this.
```
function display(varargin)
builtin('display', varargin{:});
```
Then you can do
```
dbstop in double/display at 2
```
and run your code. Now you'll be dropped into the debugger whenever display is implicitly called by an omitted semicolon, including from dynamic code. Doing it for @double seems to cover char and cells as well. If it's a different type being displayed, you may have to experiment.
You could probably override the built-in disp() the same way. I think this would be analogous to a custom replacement for Java's System.out stream.
Needless to say, adding methods to built-in types is nonstandard, unsupported, very error-prone, and something to be very wary of outside a debugging session.
This is a typical pattern that mlint will help you find: look at the right-hand side of the editor for the orange lines. These will help you find not only this problem, but many, many more. Notice also that your variable name is highlighted.
If you have a line such as:
```
foo = 2
```
and there is no ";" on the end, then the output will be dumped to the screen with the variable name appearing first:
```
foo =
     2
```
In this case, you should search the file for the string "foo =" and find the line missing a ";".
If you are seeing output with no variable name appearing, then the output is probably being dumped to the screen using either the DISP or FPRINTF function. Searching the file for "disp" or "fprintf" should help you find where the data is being displayed.
If you are seeing output with the variable name "ans" appearing, this is a case where a computation is performed but not assigned to a variable, and the line is missing a ';' at the end, such as:
```
size(foo)
```
In general, this is a bad practice for displaying what's going on in the code, since (as you have found out) it can be hard to find where these have been placed in a large piece of code. In this case, the easiest way to find the offending line is to use MLINT, as other answers have suggested.
I like the idea of "dbstop if display"; however, this is not a dbstop option that I know of.
If all else fails, there is still hope. Mlint is a good idea, but if there are many thousands of lines and many functions, then you may never find the offender. Worse, if this code has been sloppily written, there will be zillions of mlint flags that appear. How will you narrow it down?
A solution is to display your way there. I would overload the display function. Only temporarily, but this will work. If the output is being dumped to the command line as
```
ans =
stuff
```
or as
```
foo =
stuff
```
Then it has been written out with display. If it is coming out as just
```
stuff
```
then disp is the culprit. Why does it matter? Overload the offender. Create a new directory called @double inside some directory that is on your MATLAB search path (assuming that the output is a double variable; if it is character, then you will need a @char directory). Do NOT put the @double directory itself on the MATLAB search path; just put it in some directory that is on your path.
Inside this directory, put a new m-file called disp.m or display.m, depending on your determination of which one produced the command-line output. The contents of the m-file will be a call to the function builtin, which allows you to call the built-in version of disp or display on the input.
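For example, the disp variant, @double/disp.m, might look like this (a minimal sketch; the display version is analogous):

```
function disp(x)
% Overloaded disp for doubles: forward to the built-in version,
% leaving a line here to set a debugging point on.
builtin('disp', x);
```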
Now, set a debugging point inside the new function. Every time output is generated to the screen, this function will be called. If there are multiple events, you may need to use the debugger to allow processing to proceed until the offender has been trapped. Eventually, this process will trap the offensive line. Remember, you are in the debugger! Use the debugger to determine which function called disp, and where. You can step out of disp or display, or just look at the contents of dbstack to see what has happened.
When all is done and the problem repaired, delete this extra directory, and the disp/display function you put in it.
You could run mlint as a function and interpret the results.
```
>> I = mlint('filename', '-struct');
>> isErrorMessage = arrayfun(@(S) strcmp(S.message, ...
       'Terminate statement with semicolon to suppress output (in functions).'), I);
>> I(isErrorMessage).line
```
This will only find missing semicolons in that single file. So this would have to be run on a list of files (functions) that are called from some main function.
If you wanted to find calls to disp() or fprintf(), you would need to read in the text of the file and use regular expressions to find the calls.
Note: If you are using a script instead of a function you will need to change the above message to read: 'Terminate statement with semicolon to suppress output (in scripts).'
Andrew Janke's overloading is a very useful tip. The only other thing is that instead of using dbstop, I find the following works better, for the simple reason that putting a stop in display.m will cause execution to pause every time display.m is called, even if nothing is written.
This way, the stop will only be triggered when display is called to write a non-null string, and you won't have to step through a potentially very large number of useless display calls.
```
function display(varargin)
builtin('display', varargin{:});
% Drop into the debugger only when something non-empty was displayed.
if ~isempty(varargin{1})
    keyboard
end
```
A foolproof way of locating such things is to iteratively step through the code in the debugger, observing the output. This would proceed as follows:
1. Add a breakpoint at the first line of the highest-level script/function which produces the undesired output, then run the function/script.
2. Step over the lines (not stepping in) until you see the undesired output.
3. When you find the line/function which produces the output, either fix it, if it's in this file, or open the subfunction/script which is producing the output. Remove the breakpoint from the higher-level function, put a breakpoint in the first line of the lower-level function, and repeat from step 1 until the line producing the output is located.
Although a pain, you will find the line relatively quickly this way unless you have huge functions/scripts, which is bad practice anyway. If the scripts are like this, you could use a sort of partitioning approach to locate the line, in a similar manner: put a breakpoint at the start, then one halfway through, and note which half of the function produces the output; then halve again, and so on, until the line is located.
I had this problem with much smaller code and it's a bugger, so even though the OP found their solution, I'll post a small cheat I learned.
1) In the Matlab command prompt, turn on more:
```
more on
```
2) Resize the prompt-y/terminal-y part of the window to a mere line of text in height.
3) Run the code. It will stop wherever it needed to print, as there isn't the space to print it (more is blocking on a [space] or [down] press).
4) Press [Ctrl]-[C] to kill your program at the spot where it couldn't print.
5) Return your prompt-y area to normal size. Starting at the top of the trace, click on the clickable bits in the red text. These are your potential culprits. (Of course, you may need to have pressed [down], etc., to pass parts where the code was actually intended to print things.)
You'll need to traverse all your m-files (probably using a recursive function, or unix('find . -type f -iname "*.m"')). Call mlint on each filename:
```
r = mlint(filename);
```
r will be a (possibly empty) structure with a message field. Look for the message that starts with "Terminate statement with semicolon to suppress output".
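Putting the pieces together, a minimal sketch (the root folder name is illustrative; dir with '**' and startsWith need a reasonably recent MATLAB, and matching on the message prefix covers both the "in functions" and "in scripts" variants):

```
% Report missing-semicolon warnings for every m-file under srcRoot.
srcRoot = 'src';                              % illustrative folder name
files = dir(fullfile(srcRoot, '**', '*.m'));  % recursive listing
for k = 1:numel(files)
    f = fullfile(files(k).folder, files(k).name);
    r = mlint(f);                             % struct array: message, line, column
    for j = 1:numel(r)
        if startsWith(r(j).message, ...
                'Terminate statement with semicolon to suppress output')
            fprintf('%s: line %d\n', f, r(j).line);
        end
    end
end
```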

At which lines in my MATLAB code a variable is accessed?

I am defining a variable at the beginning of my source code in MATLAB. Now I would like to know at which lines this variable affects something. In other words, I would like to see all lines in which that variable is read. This includes not only accesses in the current function, but also accesses in sub-functions that receive this variable as an input argument. That way, I can quickly see where a change to this variable has any influence.
Is there any way to do so in MATLAB? A graphical marking of the corresponding lines would be nice, but a command-line output might be even more practical.
You may always use "Find Files" to search for a certain keyword or expression. In my R2012a/Windows version it is in Edit > Find Files..., with the keyboard shortcut [Ctrl]+[Shift]+[F].
The result will be a list of lines where the searched string is found, in all the files found in the specified folder. Please check out the options in the search dialog for more details and flexibility.
Later edit: thanks to @zinjaai, I noticed that @tc88 asked that this tool also track the effect of the variable inside the functions/sub-functions. I think this is:
1. Very difficult to achieve. The problem of running through all the possible values and branching on every possible conditional expression is... well, it is hard. I think it is halting-problem-hard.
2. In 90% of the cases, the assumption that the output of a function is influenced by the input is true. And the input and the output are part of the same statement (assigning the result of a function), so looking for where the variable is used as an argument should suffice to identify which output variables are affected.
3. There are perverse cases where functions will alter arguments that are handle-type (because the argument is not copied, but referenced). This side effect breaks assumption 2, and is one of the main reasons for point 1. Outlining the cases when these side effects take place is, again, hard, and it is better to assume that all of them are modified.
4. Some other cases are inherently undecidable, because they don't depend on the computer's state, but on the state of the "outside world". Example: suppose one calls uigetfile. The function returns a char type when the user selects a file, and a double type when the user chooses not to select a file. Obviously the two cases will be treated differently. How could you know which variables are created/modified before the user decides? (See the sketch after this list.)
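A minimal sketch of that last case:

```
% uigetfile returns a char array when the user selects a file,
% and the double 0 when the dialog is cancelled.
fname = uigetfile('*.m');
if ischar(fname)
    fprintf('Selected: %s\n', fname);
else
    disp('No file selected');
end
```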
In conclusion: I think that human intuition, plus the MATLAB debugger (for run time), Find Files (for quickly searching where a variable is used), and depfun (for quickly identifying function dependencies), is way cheaper. But I would like to be wrong. :-)

In Powershell, is it a good practice to write multiple functions in a single script file?

I was told to write multiple actions using a PowerShell script: actions such as app pool creation, SQL updates, file editing, etc.
This is my first time writing something of this size as a script.
So I would like to know the best practice before writing it.
Is it a good practice to write all the functions in a single file?
I am thinking I may need to write at least 10 functions, each perhaps 10 lines of code long.
Consider modules: the simplest format is a manifest (.psd1) and a single script file (.psm1) containing all the functions, aliases, ... the module exports (plus any internal helpers).
In this case you are clearly putting multiple connected functions in one file. Even if much of the code is only dot-sourced into the script module, the functions are still logically in one entity.
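A minimal sketch of generating such a manifest (paths and version are illustrative):

```
# Create a manifest next to the script module (illustrative paths).
New-ModuleManifest -Path .\mymodule\mymodule.psd1 `
    -RootModule 'mymodule.psm1' `
    -ModuleVersion '1.0.0'
```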
On the other hand, using scripts on your path, executed without having to load anything beforehand, would tend (as per Adriano's comment on the question) to support one function per file (at script scope rather than as a function statement).
Therefore: there is no one "good practice"; it all depends on the details of the circumstance.
Be pragmatic; truth comes from action, not from words ;O)
So begin at the beginning:
1) Does the thing you want to do already exist somewhere on the internet, e.g. PoshCode? (If so, you can adapt it.)
2) Think (not too much) about your functions' purpose: reuse code (write your algorithm in pseudocode).
3) Use the internet to look for functions that already exist.
4) Write all the functions in the same file as the main code in order to test them. During this phase you'll discover new functions and parameters to add to, or remove from, existing ones.
5) Once you have tested your code, put the reusable functions (and the ones they depend on) into one or more modules.
My solution would be to create a custom module to which you can add functions later.
You can save your single file with all the functions as mymodule.psm1 in a mymodule folder under one of the paths in $env:PSModulePath.
Then run Import-Module mymodule (or better, call it in your $profile so the module is ready when the console comes up).
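A minimal sketch of such a single-file module (the function names are illustrative placeholders, not part of the question):

```
# mymodule\mymodule.psm1 -- all related functions in one script module.
function New-DemoAppPool {
    param([string]$Name)
    Write-Verbose "Creating application pool '$Name'"
}

function Update-DemoDatabase {
    param([string]$ConnectionString)
    Write-Verbose "Updating the database"
}

# Export only the public functions; any helpers stay private.
Export-ModuleMember -Function New-DemoAppPool, Update-DemoDatabase
```

After Import-Module mymodule, both exported functions are available in the session.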