I work on an Access VBA 2013 application, and this part of the project focuses on form controls and buttons.
I want to lock/unlock controls using buttons placed next to them.
To keep things generic, I want the subs called by the events to call homemade functions:
Sub LockAssociateControl(fctName As String, val As Boolean)
Dim code As String
code = "Forms!" & getModuleName & ".Controls!" & fctName & ".Locked = " & CStr(val)
Debug.Print (code) 'just to test
Eval code
End Sub
(getModuleName is also a homemade function that returns the right name of the module calling the function)
For example, this one is called like below:
Private Sub FirstName_Exit(Cancel As Integer)
Call LockAssociateControl("FirstName", True)
End Sub
and in "code" variable, it sets "Forms!Module1.Controls!Name.Locked = True" (with Module1 generated by getModuleName, and Name as the parameter (I haven't found better yet)).
Now I want this code evaluated in order to avoid me from hard coding every event sub.
My problem is that an error occurs at the Eval() line and says:
Error Code: 2770. The object to which you referred in the Visual Basic procedure as an OLE object is not an OLE object
I've looked around Stack Overflow and other forums to find out what's wrong with Eval(), and I found that this function isn't designed for what I want. So I tried to find another way to do it (an Evaluate function doesn't exist in Access, etc.), but I found nothing really helpful.
I tried to use DoCmd or Call or various transformations on the parameter string of Eval(), but nothing has worked so far...
So here's my question: does anyone have a solution for this problem? And if not, do you know an alternative that would spare me from writing full statements like Forms!Module1.Controls!Name.Locked = False in every event sub?
You can do what you want more easily without using Eval() ...
Sub LockAssociateControl(fctName As String, pVal As Boolean)
Forms(getModuleName).Controls(fctName).Locked = pVal
End Sub
Note I changed the argument name from val to pVal to avoid confusing it with the Val() function.
Background: I was reading up on accessing shadowed functions, and started playing with builtin. I wrote a little function:
function klear(x)
% go to parent environment...
evalin('base', builtin('clear','x')) ;
end
This throws the error:
Error using clear
Too many output arguments.
I think this happens because evalin demands an output from whatever it's being fed, but clear is one of the functions which has no return value.
So two questions: am I interpreting this correctly, and if so, is there an alternative function that allows me to execute a function in the parent environment (that doesn't require an output)?
Note: I'm fully aware of the arguments against trying to access shadowed funcs (or rather, against naming functions in a way that shadows base funcs, etc). This is primarily a question to help me learn what can and can't be done in MATLAB.
Note 2
My original goal was to write an overload function that would require an input argument, to avoid the malware-ish behavior of clear, which defaults to deleting everything. In Q&D pseudocode,
function clear(x)
if ~exist('x','var') return
execute_in_base_env(builtin(clear(x)))
end
There are a couple of issues with your clear override:
It will always clear in the base workspace regardless of where it's called from.
It doesn't support multiple inputs, which is a common use case for clear.
Instead, I'd have it check whether it was called from the base workspace, and only apply the clear-everything guard in that case. If some function calls plain clear to clear all of its variables, that's bad practice, but it's still how that function's logic works, and you don't want to break it. Otherwise it could error, or worse, return incorrect results.
So, something like this:
function clear(varargin)
stk = dbstack;
if numel(stk) == 1 && (nargin == 0 || ismember('all', varargin))
fprintf('clear: balking at clearing all vars in base workspace. Nothing cleared.\n');
return;
end
% Check for quoting problems
for i = 1:numel(varargin)
if any(varargin{i} == '''')
error('You have a quote in one of your args. That''s not valid.');
end
end
% Construct a clear() call that works with evalin()
arg_strs = strcat('''', varargin, '''');
arg_strs = [{'''clear'''} arg_strs];
expr = ['builtin(' strjoin(arg_strs, ', '), ')'];
% Do it
evalin('caller', expr);
end
I hope it goes without saying that this is an atrocious hack that I wouldn't recommend in practice. :)
What happens in your code:
evalin('base', builtin('clear','x'));
is that builtin is evaluated in the current context, and because it is used as an argument to evalin, it is expected to produce an output. It is exactly the same as:
ans = builtin('clear','x');
evalin('base',ans);
The error message you see occurs in the first of those two lines of code, not in the second. It is not because of evalin, which does support calling statements that don't produce an output argument.
evalin requires a string to evaluate. You need to build this string:
str = 'builtin(''clear'',''x'')';
evalin('base',str);
(In MATLAB, the quote character is escaped by doubling it.)
Your function thus would look like this:
function clear(var)
try
evalin('base',['builtin(''clear'',''',var,''')'])
catch
% ignore error
end
end
(Inserting a string into another string this way is rather awkward; it's one of the many reasons I don't like eval and friends.)
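For what it's worth, sprintf can make the nesting a bit less painful; this builds the same string as the function above does:
str = sprintf('builtin(''clear'',''%s'')', var);   % e.g. builtin('clear','x')
evalin('base', str)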
It might be better to use evalin('caller',...) in this case, so that when you call the new clear from within a function, it deletes something in the function's workspace, not the base one. I think 'base' should only be used from within a GUI that is expected to control variables in the user's workspace, not from a function that could be called anywhere and is expected (by its name in this case) to do something local.
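With the function above, that just means swapping the first argument of evalin:
% clears var in the workspace of whoever called our clear override
evalin('caller', ['builtin(''clear'',''', var, ''')'])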
There are reasons why this might be genuinely useful, but in general you should try to avoid the use of clear just as much as the use of eval and friends. clear slows down program execution. It is much easier (both for the user and for the MATLAB JIT) to assign an empty array to a variable to remove its contents from memory, as suggested by rahnema1 in a comment. Your base workspace would not be cluttered with variables if you used functions more: write functions, not scripts!
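For instance, the contents of a large variable can be released without clear at all:
x = [];   % frees the memory held by x; the (now empty) variable itself remains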
I am trying to create an automatic 'debugger' for tracing function flow. Because I'm not a god, I do make mistakes, and when I do, I normally end up throwing a bunch of show statements into my functions. What I'm looking to do is create a function that will insert shows prior to each line, for each variable used in an expression on that line and any variable assigned to on the previous line.
Imagine I have a function f that is throwing an unhelpful error. I would insert
f: debugwrap[f];
after the function definition to insert the appropriate debugging within the lines of the function string, parse it, and return the augmented function.
I have had success handling the params and simple functions, but where I have trouble is where semicolons do not indicate end of line, such as in function calls. Using parse on the function body, I can easily break out all the lines and find the required variables, but once I do that, I need to 'unparse' each line in the function. That unparsing is giving me trouble, especially where functions are translated to what I believe is k, such as "*:".
Simple example with only initial logging:
q)f: {[a;b] a: a xexp b; c: a-first `int$-1#string first table[`symbols]; :c }
q)df: dp[f;";"]
q)df
"{[a;b] show "a is ",string[a]; show "b is ",string[b]; a : a xexp b;c : a - *:`int$-1#$:*:table`symbols;: c;}"
q)parse df
ERROR: *:
What I'm doing now is recursively walking through the parse tree and reconstructing the call. That is painful and not yet yielding results. What I think is the best way is to get the information I need out of each parse subtree, then unparse that subtree and append it to my function string.
Appreciate any help you all can offer.
The best place to see how debugging might be done is this code: http://code.kx.com/q/ref/debug/
Suppose that I have some function Foo that uses two internal helper functions bar and baz.
Is there a way to organize the code so that bar and baz remain "out of sight", but at the same time can be unit-tested? (Preferably, the unit tests for bar and baz would be in the same suite as the unit tests for the main function Foo.)
There are a few options to achieve this.
First, does Foo need to be a function? If it is a class, then you can implement bar and baz as Hidden with Access='protected', which is pretty tightly locked down. Then you can create a test-specific subclass that accesses bar and baz for testing. You can also scope the access to the test class to hide them even further from others' view if desired.
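A rough sketch of that first option, with placeholder method bodies (Foo, bar and baz as in the question; the test subclass name is made up):
% Foo.m
classdef Foo
    methods
        function out = run(obj, x)
            % public entry point that uses the helpers
            out = obj.bar(x) + obj.baz(x);
        end
    end
    methods (Hidden, Access = protected)
        function y = bar(obj, x)
            y = x + 1;   % placeholder
        end
        function y = baz(obj, x)
            y = x * 2;   % placeholder
        end
    end
end
% FooForTest.m: test-only subclass; the protected methods are reachable from here
classdef FooForTest < Foo
    methods
        function y = callBar(obj, x)
            y = obj.bar(x);
        end
        function y = callBaz(obj, x)
            y = obj.baz(x);
        end
    end
end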
If you do want Foo to be a function, then you still have options. One of those is to somehow expose function handles to the private local functions to your tests. You can do this through some special calling syntax of Foo that returns these handles to a test when asked. This, however, modifies the production code in a potentially confusing way, and it essentially inserts test logic into production. I would prefer hiding these functions by putting them into a package so that they are not in the global namespace. The name of the package can indicate that they are off limits and not part of your supported interface.
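A minimal sketch of the special-calling-syntax idea (the '-testhooks' flag and the helper bodies are made up for illustration); the package alternative would instead mean moving bar and baz into, say, a +foo_internal folder:
function varargout = Foo(varargin)
% Normal use:  out = Foo(x)
% Test use:    [barFcn, bazFcn] = Foo('-testhooks')
if nargin == 1 && ischar(varargin{1}) && strcmp(varargin{1}, '-testhooks')
    varargout = {@bar, @baz};   % hand the local function handles to the test
    return
end
varargout{1} = bar(varargin{1}) + baz(varargin{1});   % placeholder "real" work
end
function y = bar(x)
y = x + 1;   % placeholder
end
function y = baz(x)
y = x * 2;   % placeholder
end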
Finally, one option is to simply use the public interface in order to test these functions. Clearly they are called from the function in some scenario. You may want to consider writing a test through the front door of your interface. One benefit of this is that you would then be able to change the implementation of your local function structure easily without modifying your test. The private functions are private because by definition they are part of your implementation, not your interface. If you find it complex enough to require its own test independent from the interface of Foo then it should probably just be broken out into another package function or class as described above in order to unit test it.
Not the most elegant way to do it, but then MATLAB is not the most practical language for building modules in a single code file.
One way to do it is to have a higher-level function which transfers the input arguments to the function you choose:
function out = fooModule( FunctionCalled , varargin)
%// This main function only transfers the input arguments to the chosen function
switch FunctionCalled
case 'main'
out = foo(varargin{:}) ;
case 'bar'
out = mBar(varargin{:}) ;
case 'baz'
out = mBaz(varargin{:}) ;
end
end
function outFoo = foo(varargin)
%// your main FOO code
%// which can call the helper function if necessary
end
function outbar = mBar(varargin)
%// your code
end
function outbaz = mBaz(varargin)
%// your code
end
You can even embed the FunctionCalled parameter into varargin if you want to compact things. This would also allow you to test the type of the first argument: for example, if the first argument is not a string naming one of the helper functions, forward the execution directly to the main foo function without having to call it explicitly (so the helper functions remain 'out of sight' if not called explicitly).
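That variant could look something like this (reusing the foo/mBar/mBaz local functions above):
function out = fooModule( varargin )
%// If the first argument names a helper, dispatch to it;
%// otherwise forward everything to the main foo function.
if ~isempty(varargin) && ischar(varargin{1}) && any(strcmp(varargin{1}, {'bar','baz'}))
    switch varargin{1}
        case 'bar'
            out = mBar(varargin{2:end}) ;
        case 'baz'
            out = mBaz(varargin{2:end}) ;
    end
else
    out = foo(varargin{:}) ;
end
end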
Otherwise, a completely different approach, you could consider writing a class.
I am attempting to use LuaJ with Scala. Most things work (actually all things work if you do them correctly!) but the simple task of setting object values has become incredibly complicated thanks to Scala's setter implementation.
Scala:
class TestObject {
var x: Int = 0
}
Lua:
function myTestFunction(testObject)
testObject.x = 3
end
If I execute the script or line containing this Lua function and pass a coerced instance of TestObject to myTestFunction, it causes an error in LuaJ. LuaJ tries to write the value directly, while Scala requires you to go through the implicitly defined setter (which has the horrible name x_=; that is not valid Lua, so even attempting to call it as a function makes your Lua fail to parse).
As I said, there are workarounds for this, such as defining your own setter or using the @BeanProperty annotation. They just make code that should be easy to write much more complicated:
Lua:
function myTestFunction(testObject)
testObject.setX(testObject, 3)
end
Does anybody know of a way to get luaj to implicitly call the setter for such assignments? Or where I might look in the luaj source code to perhaps implement such a thing?
Thanks!
I must admit that I'm not too familiar with LuaJ, but the first thing that comes to my mind regarding your issue is to wrap the objects within proxy tables to ease interaction with the API. Depending upon what sort of needs you have, this solution may or may not be the best, but it could be a good temporary fix.
local mt = {}
function mt:__index(k)
return self.o[k] -- Define how your getters work here.
end
function mt:__newindex(k, v)
return self.o[k .. '_='](v) -- "object.k_=(v)"
end
local function proxy(o)
return setmetatable({o = o}, mt)
end
-- ...
function myTestFunction(testObject)
testObject = proxy(testObject)
testObject.x = 3
end
I believe this may be the least invasive way to solve your problem. As for modifying LuaJ's source code to better suit your needs, I had a quick look through the documentation and source code and found this, this, and this. My best guess says that line 71 of JavaInstance.java is where you'll find what you need to change, if Scala requires a different way of setting values.
f.set(m_instance, CoerceLuaToJava.coerce(value, f.getType()));
Perhaps you should use the method syntax:
testObject:setX(3)
Note the colon ':' instead of the dot '.' which can be hard to distinguish in some editors.
This has the same effect as the function call:
testObject.setX(testObject, 3)
but is more readable.
It can also be used to call static methods on classes:
luajava.bindClass("java.net.InetAddress"):getLocalHost():getHostName()
The part to the left of the ':' is evaluated once, so a statement such as
x = abc[d+e+f]:foo()
will be evaluated as if it were
local tmp = abc[d+e+f]
x = tmp.foo(tmp)
As noted here, functions in packages, as well as static methods in classes, still need to use a packagename.functionname syntax or import packagename.* for each function (since the imports are part of the function workspace and not global). This means that changing the package/class name later on can become a tedious nuisance.
Is there any way to do something like import this.*, i.e. a package/class name agnostic method to access all functions/static methods in the same package/class?
So... doesn't this require importthis to also be imported? Or is importthis a function you always have on your path?
It seems hardly more complex to just paste an "import this" block like the one below at the top of each function, and then you don't have to worry about importthis being on your path. I tend to feel that reliance on the path is dangerous.
"Import This" block
%% Import own package
[~, pkgdir] = fileparts(fileparts(mfilename('fullpath')));
import([pkgdir(2:end) '.*']);
You can even put it in a try/catch block to make sure it's in a package directory, and decide what to do if it's not.
%% Import own package
try
[~, pkgdir] = fileparts(fileparts(mfilename('fullpath')));
import([pkgdir(2:end) '.*']);
catch err
if ~strcmp(err.identifier,'MATLAB:UndefinedFunction'), rethrow(err); end
end
I recently ran into a similar problem and found the following solution for packages. However it is VERY hacky.
You create a function called importthis with an optional argument.
function to_eval = importthis(exclude_list)
if nargin == 0
exclude_list = {};
end
var_name = genvarname('A', exclude_list); %avoid shadowing
to_eval = ['[~,'...
, var_name...
, ']=fileparts(fileparts(mfilename(''fullpath'')));'... %get containing dir
, 'eval([''import '','...
, var_name...
, '(2:end),''.*'']);'... %remove '+'
, 'clear '... %clean up
, var_name
];
end
This function returns a string which, when evaled, imports the "this" package. So in your package functions you would put the following near the top:
function B = myfunc(A)
eval(importthis);
%function body
end
You can also pass who to importthis, leaving your function's namespace clean.
function B = myfunc(A)
eval(importthis(who));
%function body
end
I can't decide whether I should feel proud or disgusted by what I did.
This probably is not a bounty-worthy answer, but as you do not have any answers I thought I would post it anyway! You can invoke static methods via an instance of the class, which you would only need to define once. You can invoke functions via function handles, but this requires one handle per function.
Using these techniques you could define all your static method and function references in one place and use those references throughout your package. Then, if you decided to change the package name at a later point, you would only need to update the references, which are all stored in that one place.
See:
Calling Static Methods
You can also invoke static methods using an instance of the class,
like any method:
obj = MyClass;
value = obj.pi(.001);
function_handle (@)
The following example creates a function handle for the humps function
and assigns it to the variable fhandle.
fhandle = @humps;
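As a sketch of keeping those references in one place (the package name mypkg, class MyClass, function helper1 and the packageRefs helper below are all made-up examples):
function [classRef, helper1] = packageRefs()
% the single place that knows the package name
classRef = mypkg.MyClass();     % one instance, reused for "static" calls
helper1  = @mypkg.helper1;      % one handle per package function
end
Elsewhere in the package you would then write, for example:
[classRef, helper1] = packageRefs();
value = classRef.pi(.001);      % static method invoked via the instance
y = helper1(2);                 % package function invoked via its handle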