I have a class with a simple user interface and want to write unit tests for all public member functions.
One of my buttons issues a warning via a dialog. I use the uiconfirm function and assign its result to a variable so the function blocks until the user confirms the dialog.
classdef UI
    properties
        fig matlab.ui.Figure
        button matlab.ui.control.Button
    end
    methods
        function obj = UI()
            obj.fig = uifigure();
            obj.button = uibutton(obj.fig);
            obj.button.Text = "click me";
            obj.button.ButtonPushedFcn = @(~, ~) obj.click();
        end
    end
    methods
        function click(obj)
            [~] = uiconfirm(obj.fig, "Something failed.", "Warning", ...
                "Options", {'OK'}, "Icon", "warning");
        end
    end
end
I use class based unit tests:
https://de.mathworks.com/help/matlab/class-based-unit-tests.html
How can I test the click function?
I'm not sure whether you're familiar with the MATLAB App Testing Framework. It lets you programmatically interact with App Designer/uifigure apps. Take a look at the component-gesture availability matrix to help accelerate your UI testing needs. Having said that, as of today the App Testing Framework doesn't yet support directly interacting with blocking UI dialogs like uiconfirm.
An obvious brute-force way of solving the blocking problem is to shadow the uiconfirm function in your tests with a custom, non-blocking version. If that is acceptable for your needs, go ahead with it. However, as your app changes and scales up, the shadowed functions may need to get more complicated too and thus become hard to maintain.
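As a rough illustration of that shadowing idea (this is a sketch, not part of your app): put a file named uiconfirm.m in a folder that is added to the path only while the tests run, ahead of the built-in, so the call returns immediately instead of blocking:

% uiconfirm.m -- non-blocking stand-in used only during tests.
% Accepts the same arguments as the built-in, ignores them, and
% returns a canned selection straight away.
function selection = uiconfirm(varargin)
    selection = 'OK';
end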
With that in mind, a better approach to testing your app programmatically is to use a mocking framework to create a mock object that defines the behavior of uiconfirm. The best way to achieve this is via dependency injection. In your case, the app could have a property that stores a context-aware "UIConfirm" delegate (name it according to your workflow). By default, in a production environment, it would call the real uiconfirm command, but a mock or stub delegate could supply deterministic outputs to make the system more testable (and avoid the blocking dialog issue altogether). It's certainly added overhead for an otherwise simple app, but I get the sense that you value testing as much as we do!
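A minimal sketch of what that injection could look like, assuming a Confirmer property holding a function handle (the property name, constructor argument, and test stub are illustrative, not part of your original class):

classdef UI
    properties
        fig matlab.ui.Figure
        button matlab.ui.control.Button
        % Injected dialog delegate; defaults to the real uiconfirm.
        Confirmer function_handle = @uiconfirm
    end
    methods
        function obj = UI(confirmer)
            if nargin > 0
                obj.Confirmer = confirmer; % a test injects a stub here
            end
            obj.fig = uifigure();
            obj.button = uibutton(obj.fig);
            obj.button.Text = "click me";
            obj.button.ButtonPushedFcn = @(~, ~) obj.click();
        end
        function click(obj)
            [~] = obj.Confirmer(obj.fig, "Something failed.", "Warning", ...
                "Options", {'OK'}, "Icon", "warning");
        end
    end
end

A class-based test could then construct the app with a stub that never blocks, e.g. ui = UI(@(varargin) 'OK'); ui.click();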
Also please take a look at this detailed Mocking-App Testing example https://www.mathworks.com/help/matlab/matlab_prog/write-test-that-uses-app-testing-and-mocking-frameworks.html
I would like to know how to specify a redirect to another function from the ask function.
Something like:
Launch the main function to choose an action.
Start the function chosen by the user.
Loop in that function as long as the user does not say "stop", for example.
Maybe with a specific intent in the ask function? I don't know...
Does anyone have a solution?
Your request is what contexts are used for in Dialogflow. You can set it up so that certain intents are only available to be triggered if a certain input context exists. These contexts originate from an output context of an intent.
Using the dialog state is not recommended. If you want to store generic data, you should use app.data in v1 or conv.data in v2 of the AoG client library. This data object persists throughout a session, which is more powerful than dialog state.
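For example, using conv.data with the v2 Node.js client library (the intent names and the mode field here are purely illustrative, and the webhook export/deployment is omitted):

const { dialogflow } = require('actions-on-google');
const app = dialogflow();

// Remember which "sub-function" the user is in; conv.data persists
// for the rest of the conversation.
app.intent('choose_action', (conv, params) => {
  conv.data.mode = params.action;
  conv.ask(`Starting ${params.action}. Say "stop" when you are done.`);
});

app.intent('continue_action', (conv) => {
  if (conv.data.mode) {
    conv.ask(`Still in ${conv.data.mode}. Say "stop" to leave.`);
  } else {
    conv.ask('Which action would you like to start?');
  }
});

app.intent('stop_action', (conv) => {
  conv.data.mode = null;   // leave the loop
  conv.ask('Stopped. What would you like to do next?');
});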
You can't. The ask() method is "completed" in the text intent. It is a shame that you can't - the code would be so much less cluttered if you could.
In any case, you can pass "dialog state" to ask(), then call getDialogState() in the text intent and use that to restore your application's context and continue from there.
I want to organize a working bus functional model and push commonly used procedures (which look like CPU subroutines) out into a package and get them out of the main cpu model, but I'm stuck.
The procedures don't have access to the hardware bits when they're pushed out in a package.
In Verilog, I would put commonly used procedures out into an include file and link them into the CPU model as required for a given test suite.
More details:
I have a working bus functional model of a CPU, for simulation test benching.
At the "user interface" level I have a process called "main" running inside the CPU model which calls my predefined "instruction set" like this:
cpu_read(address, read_result);
cpu_write(address, write_data);
etc.
I bundle groups of those calls up into higher level procedures like
configure_communication_bus;
clear_all_packet_counters;
etc.
At the next layer these generic functions call a more hardware specific version which knows the interface timing for the design,
and those procedures then use an input record and output record to connect to the hardware module ports and waggle the cpu bus signals as required.
cpu_read calls hardware_cpu_read(cpu_input_record, cpu_output_record, address);
Something like this:
procedure cpu_read (address     : in  std_logic_vector(15 downto 0);
                    read_result : out std_logic_vector(31 downto 0)) is
begin
    hardware_cpu_read(cpu_input_record, cpu_output_record, address, read_result);
end procedure;
The cpu_input_record and cpu_output_record are declared as signals of type nnn_record in the cpu model vhdl file.
So this is all working, but every single one of these procedures is stored in the CPU VHDL module file, in its declarative region, so that they all share the same scope.
If I share the model with team members, they will need to add their own testing subroutines, and those also end up in the same place in the file; their simulation test code also has to go into the "main" process along with mine.
I'd rather link in various tests from outside the model, and keep only model-specific procedures in the model file.
Ironically, I can push the lowest-level hardware procedures out to a package and call them from within the "main" process, but the higher-level procedures can't be put into that package (or any other package) because they don't have access to cpu_input_record and cpu_output_record.
I feel like there must be a simple way to clean up this code and make it modular, and I'm just missing something obvious.
I don't really think making a command interpreter and loading my test code into a behavioral ROM is the right way to go, by the way. Nor is fighting with the simulator interface to connect up a C program, but I may break down and try that.
Quick sketch of an answer (to the question I think you are asking! :-) though I may be off-beam...
To move the BFM subprograms into a reusable package, they need to be independent of the execution scope - that usually means a long parameter list for each of them, so using them in a testbench quickly gets tedious compared with the parameterless (or parameter-lite) versions you have now.
The usual workaround is to implement the BFM in a package, with long parameter lists.
Then write parameter-lite local equivalents (wrappers) in the execution scope, which simply call the package versions supplying all the parameters explicitly.
This is just boilerplate - not pretty but it does allow you to move the BFM into a package. These wrappers can be local to the testbench, to a process within it, or even to a subprogram within that process.
(The parameter types can be records for tidiness: these are probably declared in a third package, shared between the BFM, the TB, and the synthesisable device under test...)
Thanks to overloading, there is no ambiguity between the local and BFM package versions, so the actual testbench remains as simple as possible.
Example wrapper function:
function cpu_read(address : unsigned) return slv_32 is
begin
    return BFM_pack.cpu_read (
        address     => address,
        rd_data_bus => tb_rd_data_bus,
        wait_sig    => tb_wait_signal,   -- "wait" is a reserved word, so the formal needs another name
        oe          => tb_mem_oe,
        -- ditto for all the signals, constants, and variables it needs from the tb_ scope
    );
end cpu_read;
Currently your test procedures require two extra signal parameters, cpu_input_record and cpu_output_record. This is not so bad. It is not uncommon to just have these on all procedures that interact with the CPU and be done with it. So use hardware_cpu_read rather than cpu_read. Add cpu_input_record and cpu_output_record to your configure_communication_bus and clear_all_packet_counters procedures and be done. Perhaps choose shorter names.
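A rough sketch of what that could look like in a package (the record type names, parameter modes, and use clauses are assumptions based on your description, not your actual declarations):

library ieee;
use ieee.std_logic_1164.all;
use work.cpu_record_pkg.all;   -- assumed home of the record types

package cpu_bfm_pkg is
    -- Higher-level procedures take the two records as signal parameters,
    -- so any caller (a testbench process or another package's procedure)
    -- just passes its local cpu_input_record / cpu_output_record signals.
    procedure cpu_read (
        signal cpu_in  : inout cpu_input_record_t;
        signal cpu_out : in    cpu_output_record_t;
        address        : in    std_logic_vector(15 downto 0);
        read_result    : out   std_logic_vector(31 downto 0));

    procedure configure_communication_bus (
        signal cpu_in  : inout cpu_input_record_t;
        signal cpu_out : in    cpu_output_record_t);
end package cpu_bfm_pkg;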
I do a similar approach, except I use only one record with resolved elements. To make this work, you need to initialize the record so that all elements are non-driving (i.e. 'Z' for std_logic). To make this more flexible, I have created resolution functions for integer, time, and real. However, this only saves you one signal - not a real huge win. Perhaps halfway to where you think you want to be. But it is more work than what you are doing.
For VHDL-201X, we are working on syntax to allow parameters/ports to automatically map to an identically named signal. This will get you to where you want to be with any of the approaches (yours, mine, or Brian's, without the extra wrapper subprogram). It is posted here: http://www.eda.org/twiki/bin/view.cgi/P1076/ImplicitConnections. Given this, I would add the two records to your procedures and call it good enough for now.
Once you get past this problem, you also seem to be asking how to write separate tests using the same testbench. For this I use multiple architectures - I like to think of these as a factory class for concurrent code. To make this feasible, I separate the stimulus-generation code from the rest of the testbench (typically: netlist connections and clock). My presentation, "VHDL Testbench Techniques that Leapfrog SystemVerilog", has an overview of this architecture along with a number of other goodies. It is available at: http://www.synthworks.com/papers/index.htm
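A bare-bones sketch of that structure (the entity, its ports, and the procedure calls are illustrative): TestCtrl is instantiated once in the testbench, and each test case is just another architecture of it:

entity TestCtrl is
    port (
        cpu_in  : inout cpu_input_record_t;
        cpu_out : in    cpu_output_record_t
    );
end entity TestCtrl;

-- Each test lives in its own file as a different architecture of TestCtrl.
architecture test_basic_read_write of TestCtrl is
begin
    main : process
        variable read_result : std_logic_vector(31 downto 0);
    begin
        configure_communication_bus(cpu_in, cpu_out);
        cpu_read(cpu_in, cpu_out, x"0010", read_result);
        wait;   -- end of this test
    end process main;
end architecture test_basic_read_write;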
You're definitely on the right track, in fact I have a variant like this (what you describe).
The catch is that I now build up whole subroutines using the "parameter-light" procedures, and those are what I want to put in a package to share and reuse. The problem is that any procedure pushed out to a package can't call the parameter-light procedures in the main VHDL file.
So what happens is that we have one main VHDL file with all the common CPU hardware setup routines, plus every designer's test code, all in the same file.
Long story short, putting our test subroutines into separate files is really what I was hoping for.
I am new to Lift and am considering whether I should investigate it more closely and start using it as my main platform for web development. However, I have a few "fears" which I would be happy to have dispelled first.
Security
Assume that I have the following snippet that generates a form. There are several fields and the user is allowed to edit just some of them.
def form(in: NodeSeq): NodeSeq = {
  val data = Data.get(...)
  <lift:children>
    Element 1: { textIf(data.el1, data.el1(_), isEditable("el1")) }<br />
    Element 2: { textIf(data.el2, data.el2(_), isEditable("el2")) }<br />
    Element 3: { textIf(data.el3, data.el3(_), isEditable("el3")) }<br />
    { button("Save", () => data.save) }
  </lift:children>
}

def textIf(label: String, handler: String => Any, editable: Boolean): NodeSeq =
  if (editable) text(label, handler) else Text(label)
Am I right that there is no vulnerability that would allow a user to change a value of some field even though the isEditable method assigned to that field evaluates to false?
Performance
What is the best approach to form processing in Lift? I really like the way of defining anonymous functions as handlers for every field - but how does it scale? I guess that for every handler a function is added to the session together with its closure, and it stays there until the form is posted back. Doesn't that introduce a potential performance issue for a service under high load (say 200 requests per second)? And when do these handlers get freed if the form isn't resubmitted and the user either closes the browser or navigates to another page?
Thank you!
With regards to security, you are correct. When an input is created, a handler function is generated and stored server-side using a GUID identifier. The function is session specific, and closed over by your code - so it is not accessible by other users and would be hard to replay. In the case of your example, since no input is ever displayed - no function is ever generated, and therefore it would not be possible to change the value if isEditable is false.
As for performance, on a single machine, Lift performs incredibly well. It does however require session-aware load balancing to scale horizontally, since the handler functions do not easily serialize across machines. One thing to remember is that Lift is incredibly flexible, and you can also create stateless form processing if you need to (albeit, it will not be as secure). I have never seen too much of a memory hit with the applications we have created and deployed. I don't have too many hard stats available, but in this thread, David Pollak mentioned that demo.liftweb.net at the time had 214 open sessions consuming about 100MB of ram (500K/session).
Also, here is a link to the Lift book's chapter on Scalability, which also has some more info on security.
The closures and all that stuff are surely cleaned up at session shutdown. Earlier than that -- I don't know. Anyway, it's not really a theoretical question -- it highly depends on how users use web forms in practice. So, for a broader answer, I'd ask the question on the main Liftweb group -- https://groups.google.com/forum/#!forum/liftweb
Also, you can use a "static" form if you want to. But AFAIK there are no problems with memory, and everybody is using the main approach to forms.
If you don't create the handler xml/html, the user won't be able to change the data, that's for sure. In your code, if I understood it correctly (I'm not sure), you don't create text(label, handler) when it's not needed, so everything is secure.
I run into this regularly and am just looking for the best practice/approach. I have an app containing a database/datamodule and want to fire up the database/datasets on startup without having "Active at runtime" set to True at design time (the database location varies). I also run a web "check for updates" routine when the app starts up.
Given TForm event sequences, and results from various trial and error, I'm currently using this approach:
I use a "Globals" record set up in the main form to store all global vars, have one element of that called Globals.AppInitialized (boolean), and set it to False in the Initialization section of the main form.
At the main form's OnShow event (all forms are created by then), I test Globals.AppInitialized; if it's false, I run my "Initialization" stuff, and then finish by setting Globals.AppInitialized := True.
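Roughly, the pattern looks like this (OpenDatabases and CheckForUpdates stand in for my actual routines):

var
  Globals: record
    AppInitialized: Boolean;
  end;

procedure TfrmMain.FormShow(Sender: TObject);
begin
  if not Globals.AppInitialized then
  begin
    OpenDatabases;       // activate datasets now that the DB location is known
    CheckForUpdates;     // web "check for updates" routine
    Globals.AppInitialized := True;
  end;
end;

initialization
  Globals.AppInitialized := False;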
This seems to work pretty well, but is it the best approach? Looking for insight from others' experience, ideas, and opinions. TIA.
I generally always turn off auto creation of all forms EXCEPT for the main form and possibly the primary datamodule.
One trick I learned: you can add your datamodule to your project, allow it to auto-create, and have it created BEFORE your main form. Then, when your main form is created, the OnCreate for the datamodule will already have run.
If your application has some code to, say, set the focus of a control (something you can't do during creation, since it's "not visible yet"), then create a user message and post it to the form in your OnCreate. The message SHOULD (no guarantee) be processed as soon as the form's message loop starts processing. For example:
const
  wm_AppStarted = wm_User + 101;

type
  TForm1 = class(TForm)
    :
    procedure wmAppStarted(var Msg: TMessage); message wm_AppStarted;
  end;

// In your OnCreate event add the following, which should result in your wmAppStarted handler firing:
PostMessage(Handle, wm_AppStarted, 0, 0);
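The handler itself (sketched here; OpenDatabases and CheckForUpdates are placeholders for your own startup routines) then does the work once the form is fully up:

procedure TForm1.wmAppStarted(var Msg: TMessage);
begin
  // Runs once the message loop is pumping, i.e. after the form is visible,
  // so it is safe to set focus, open datasets, check for updates, etc.
  OpenDatabases;
  CheckForUpdates;
end;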
I can't think of a single time that this message was not processed, but the nature of the call is that it is added to the message queue, and if the queue is full it is "dropped". Just be aware that that edge case exists.
You may want to modify the project source (.dpr file) directly, after the form creation calls and before Application.Run. (Or even earlier, if need be.)
This is how I usually handle such initialization stuff:
...
Application.CreateForm(TMainForm, MainForm);
...
MainForm.ApplicationLoaded; // loads options, etc..
Application.Run;
...
I don't know if this is helpful, but some of my applications don't have any form auto created, i.e. they have no mainform in the IDE.
The first form created with the Application object as its owner will automatically become the mainform. Thus I only autocreate one datamodule as a loader and let this one decide which datamodules to create when and which forms to create in what order. This datamodule has a StartUp and ShutDown method, which are called as "brackets" around Application.Run in the dpr. The ShutDown method gives a little more control over the shutdown process.
This can be useful when you have designed different "mainforms" for different use cases of your application or you can use some configuration files to select different mainforms.
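Roughly, the .dpr then looks like this (TDmLoader is a placeholder name for the loader datamodule described above):

begin
  Application.Initialize;
  Application.CreateForm(TDmLoader, DmLoader);
  DmLoader.StartUp;    // decides which datamodules/forms to create and in what order;
                       // the first form created with Application as owner becomes the mainform
  Application.Run;
  DmLoader.ShutDown;   // controlled shutdown
end.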
There actually isn't such a concept as a "global variable" in Delphi. All variables are scoped to the unit they are in and other units that use that unit.
Just make the AppInitialized and Initialization stuff as part of your data module. Basically have one class (or datamodule) to rule all your non-UI stuff (kind of like the One-Ring, except not all evil and such.)
Alternatively you can:
Call it from your splash screen.
Do it during log in
Run the "check for update" in a background thread - don't force them to update right now. Do it kind of like Firefox does.
I'm not sure I understand why you need the global variables? Nowadays I write ALL my Delphi apps without a single global variable. Even when I did use them, I never had more than a couple per application.
So maybe you need to first think why you actually need them.
I use a primary data module to check whether the DB connection is OK; if it isn't, it shows a custom form to set up the DB connection, and then it loads the main form:
Application.CreateForm(TDmMain, DmMain);
if DmMain.isDBConnected then
begin
  Application.CreateForm(TDmVisualUtils, DmVisualUtils);
  Application.CreateForm(TfrmMain, frmMain);
end;
Application.Run;
One trick I use is to place a TTimer on the main form, set the interval to something like 300 ms, and perform any initialization there (db login, network file copies, etc.). Starting the application brings up the main form immediately and lets any initialization 'stuff' happen behind it. Users don't start multiple instances thinking "Oh... I didn't double-click... I'll do it again."
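Sketched out (the timer, handler, and init routines are placeholder names), the timer fires once after the form is shown and then disables itself:

procedure TfrmMain.tmrStartupTimer(Sender: TObject);
begin
  tmrStartup.Enabled := False;   // fire once only
  ConnectDatabase;               // placeholder: db login / open datasets
  CheckForUpdates;               // placeholder: web "check for updates" routine
end;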
I am trying to write unit tests for a bit of code involving events. Since I need to raise an event at will, I've decided to rely on Rhino Mocks to do so for me, and then make sure that the results of the event being raised are as expected (when the button is clicked, values should change in a predictable manner; in this example, the height of the object should decrease).
So, I do a bit of research and realize I need an Event Raiser for the event in question. Then it's as simple as calling eventraiser.Raise(); and we're good.
The code I've written for obtaining an event raiser is as follows (C#, more or less copied straight off the net):
using (mocks.Record())
{
    MyControl testing = mocks.DynamicMock<MyControl>();
    testing.Controls.Find("MainLabel", false)[0].Click += null;
    LastCall.IgnoreArguments();
    LastCall.Constraints(Rhino.Mocks.Constraints.Is.NotNull());
    Raiser1 = LastCall.GetEventRaiser();
}
I then test it in playback mode:
using (mocks.Playback())
{
    MyControl thingy = new MyControl();
    int temp = thingy.Size.Height;
    Raiser1.Raise();
    Assert.Greater(temp, thingy.Size.Height);
}
The problem is that when I run these tests through NUnit, they fail. An exception is thrown at the line testing.Controls.Find("MainLabel", false)[0].Click += null; complaining about adding null to the event listener. Specifically: "System.NullReferenceException: Object reference not set to an instance of an object".
Now, I was under the impression that code inside the mocks.Record block wouldn't actually be executed; instead, it would create expectations for calls made during playback. However, this is the second time I've had a problem like this (the first involved classes/cases that were a lot more complicated), where it appears in NUnit that the code is actually being called normally instead of creating expectations. I am curious whether anyone can point out what I am doing wrong, or suggest an alternative way to solve the core issue.
I'm not sure, but you might get that behaviour if you haven't made the event virtual in MyControl. If methods, events, or properties aren't virtual, then I don't think DynamicMock can replace their behaviour with recording and playback versions.
Personally, I like to define interfaces for the classes I'm going to mock out and then mock the interface. That way, I'm sure to avoid this kind of problem.
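For illustration, a rough sketch of that interface-based approach with Rhino Mocks (IMainLabel, the test name, and the MyControl constructor taking the abstraction are assumptions, not part of the original code):

using System;
using NUnit.Framework;
using Rhino.Mocks;
using Rhino.Mocks.Interfaces;   // IEventRaiser

// The control depends on a small abstraction instead of a concrete Label.
public interface IMainLabel
{
    event EventHandler Click;
}

[TestFixture]
public class MyControlTests
{
    [Test]
    public void Height_decreases_when_label_is_clicked()
    {
        var mocks = new MockRepository();
        var label = mocks.DynamicMock<IMainLabel>();

        IEventRaiser raiser;
        using (mocks.Record())
        {
            label.Click += null;                 // register interest in the event
            LastCall.IgnoreArguments();
            raiser = LastCall.GetEventRaiser();  // raiser for the interface's event
        }

        using (mocks.Playback())
        {
            var thingy = new MyControl(label);   // assumed ctor taking the abstraction
            int before = thingy.Size.Height;
            raiser.Raise(label, EventArgs.Empty);
            Assert.Greater(before, thingy.Size.Height);
        }
    }
}

Mocking the interface avoids both the non-virtual member problem and the need to dig controls out of the Controls collection.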