I am trying to deploy a small change to a trigger, and I am getting warnings about insufficient (0%) unit test coverage for another trigger (setTitle, shown below).
There is a test in place for it (see below), but for some reason it is not being taken into account. The test is defined similarly to other tests that run successfully, but in this case the trigger is never invoked, which leads to the insufficient-coverage warnings.
Any ideas or suggestions on where I can look, and whether there is any way to get past this?
Trigger Test:
Call_Report__c c = new Call_Report__c(name='test cr', opportunity__c=o.id);
insert c;
Trigger declaration:
trigger setTitle on Call_Report__c (before insert)
Thank you!
I think the best way would be to run the unit test manually in your target org and examine the debug log. Also check manually from the UI whether the functionality still behaves as expected.
Some tips:
(Applicable only when deploying to sandboxes.) As stupid as it sounds, are you sure the trigger is active? There's a checkbox when you edit it from the UI, and a status field in the accompanying metadata XML.
A similar checkbox: is the trigger valid? If it calls a method from a class that was modified in the meantime, you'll have a problem.
Do you have any recently introduced validations on Call_Report__c or on any prerequisites used in the test (like Opportunity), for example:
fields marked as required in the field definition,
a text field whose size was shortened while the test still passes a string that's now too long,
validation rules on Opportunity etc. (not on Call_Report__c itself, because those are checked after before-insert triggers fire).
Can you add some System.debug() calls to the test to make sure the Opportunity you're using is created OK? Also, developers are sometimes too VF-centered and don't throw exceptions but swallow them and set Visualforce error messages instead, so check ApexPages.hasMessages() too.
(More and more stupid stuff at this point.) ;) Are the class and method marked as isTest / testMethod (see the sketch after these tips)? Is that the only trigger on the object? If there are more before insert triggers, you can't guarantee their order; maybe something fails over there.
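For reference, the minimal shape such a test should have (a sketch only; the Opportunity field values are illustrative, since the question shows just a fragment):

@isTest
private class SetTitleTest {

    static testMethod void testInsertCallReport() {
        // Assumed setup: the original fragment references o, an Opportunity.
        Opportunity o = new Opportunity(Name = 'test opp',
                                        StageName = 'Prospecting',
                                        CloseDate = Date.today());
        insert o;
        System.debug('Opportunity created: ' + o.Id);

        Call_Report__c c = new Call_Report__c(Name = 'test cr',
                                              Opportunity__c = o.Id);
        insert c; // should invoke the setTitle before-insert trigger

        // Catch errors that were swallowed into Visualforce page messages.
        System.assert(!ApexPages.hasMessages(), 'Unexpected VF messages');
    }
}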
My DES model was working well until I added a new function (just like many others added before), and then the model gave me all these error messages:
I wonder what these messages mean and what made all of them appear so suddenly. I was running the model just before that without any errors. Is there a way I could get my model back? :)
Thank you.
Edit 1:
Here is the newly created function: (screenshot: "Newly created function")
Even after I ignored it and commented out all calls to it, the errors were still appearing. I closed this model version and opened a backup, then created the same function, and the whole scenario happened again with the same 94 errors.
Is there something I need to change in the function itself?
The function checks the number of agents in certain parts of the model in order to stop delay blocks accordingly. It does this with variables that are incremented when agents exit, taking the difference between them.
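For illustration, a rough sketch of what such a function might look like in AnyLogic's Java (purely a guess, since the actual code was only posted as a screenshot; enteredCount and exitedCount are assumed model variables incremented in the blocks' "on exit" actions):

// Hypothetical reconstruction of the counting function described above.
int agentsInSection() {
    // agents currently between the two counting points
    return enteredCount - exitedCount;
}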
Typically this is caused by a syntax error in the new function you created. Try ignoring the function and compiling again to see if the errors go away.
Most of the time the compiler is smart enough to show you the error, but if you replace a { with a [ or leave out a { entirely, the compiler gets very confused and gives you hundreds of errors.
If you still can't find the issue, maybe paste the code of your new function into the question.
Well, the system we have has a bunch of dependencies, but I'll try to summarize what's going on without divulging too many details.
The tests being executed live in a test assembly (a .dll). A lot of these tests call an API.
In the problematic method there are two API calls with an await on them: one writes a record to an external interface, and the other extracts all records from that interface and reads the last one. The test simply checks whether writing the last record succeeded in an end-to-end context, which is why there is both a write and then a read.
If we execute the test in Visual Studio, everything works as expected. I also ran it manually from the command line via vstest.console.exe, and the expected results always come out as well.
However, in the VS Test task in VSTS it fails for some reason. We've been trying to figure it out, and eventually we reached the point where we printed the list from the 'read' part. It turns out the last record we inserted isn't in the data we pulled, yet when we check the external interface via a different method we can confirm the write actually happened. What gives? Why is VSTest getting what looks like an outdated set of records?
We also noticed two things:
1.) For the tests that passed, none of the Console.WriteLine output appears in the logs. It only shows up for failed tests.
2.) Even though our Data.Should.Be call is at the very end of the TestMethod, the logs report the failure BEFORE the lines are printed! And even then, the printing should happen after reading the list of records, yet when the prints do appear we're still missing the record we just wrote.
Is there some bottom-to-top behavior we're missing here? It really seems like VSTS's vstest is executing the assert before the actual code. The TestMethods do run in the right order, though (the 4th test written top-to-bottom in the code executes 4th rather than 4th-from-last), and we need them to run in order because some of the later tests depend on the earlier ones succeeding.
Anything we're missing here? I'd post source code, but there's a bunch of things I'd need to scrub first.
Turns out we were sorely misunderstanding what 'await' does. We're now using .Wait() on the culprit call instead, and we'll also go back through the other tests to check their quality.
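For illustration, a minimal sketch of the failure mode and the fix. The client type and its WriteRecordAsync/ReadAllRecordsAsync methods are hypothetical stand-ins for the real API, not actual names from the project:

using System.Linq;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class EndToEndTests
{
    // Hypothetical client standing in for the real external interface.
    private readonly IRecordApi _api = RecordApiFactory.Create();

    [TestMethod]
    public void WriteThenReadLastRecord()
    {
        var record = new Record("expected last entry");

        // Broken version: calling the async method without awaiting or
        // blocking lets the read (and the assert) race ahead of the write:
        //   _api.WriteRecordAsync(record);

        // Fixed version: .Wait() blocks until the write Task completes,
        // so the subsequent read sees the new record.
        _api.WriteRecordAsync(record).Wait();

        var all = _api.ReadAllRecordsAsync().Result;
        Assert.AreEqual(record, all.Last());
    }
}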
The TGUITestRunner form displays DUnit test results; it is created once by the GUITestRunner.RunTest procedure:
procedure RunTest(test: ITest);
begin
  with TGUITestRunner.Create(nil) do
  begin
    try
      Suite := test;
      ShowModal;
    finally
      Free;
    end;
  end;
end;
I want to extend it at runtime to write colored status messages. This should be possible, because the status messages at the bottom of the GUI are placed in a TRichEdit. So I need to get a pointer to this form somewhere in my TTestCase.
Can I do that without modifying DUnit's code? Maybe you can recommend some hack?
A "decoupled" way to do it might be to use some "embedded codes" inside your status messages:
Status('<blue>Testing');
From within the dUnit test framework, you could check if the initial character of a status message is '<', and extract the arguments such as color or whatever else, and then modify dUnit to handle it.
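A minimal sketch of that parsing, assuming you patch something like this into the runner's status-message handling (ParseStatusMessage is a hypothetical helper, not part of dUnit):

uses Graphics;

// Splits '<blue>Testing' into a color and the plain message text.
procedure ParseStatusMessage(const Msg: string; out AColor: TColor; out AText: string);
var
  CloseTag: Integer;
  Tag: string;
begin
  AColor := clWindowText; // default when no embedded code is present
  AText := Msg;
  if (Msg <> '') and (Msg[1] = '<') then
  begin
    CloseTag := Pos('>', Msg);
    if CloseTag > 0 then
    begin
      Tag := Copy(Msg, 2, CloseTag - 2);
      if Tag = 'blue' then
        AColor := clBlue
      else if Tag = 'red' then
        AColor := clRed;
      AText := Copy(Msg, CloseTag + 1, MaxInt);
    end;
  end;
end;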
That way, your tests will still run on an unmodified dUnit test runner. Several years from now you might want to move to the latest dUnit version, so I don't recommend making any API changes or trying to access the test runner objects. The APIs and the things you can see from within a test are strictly controlled on purpose; it's a principle of proper object-oriented design, which the creators of jUnit/xUnit/dUnit believe in intensely.
I have a complex set of integration tests that uses Perl's WWW::Mechanize to drive a web app and check the results based on specific combinations of data. There are over 20 subroutines that make up the logic of the tests, loop through data, etc. Each test runs several of the test subroutines on a different dataset.
The web app is not perfect, so sometimes bugs cause the tests to fail with very specific combinations of data. But these combinations are rare enough that our team will not bother to fix the bugs for a long time; building many other new features takes priority.
So what should I do with the failing tests? It's just a few tests out of several dozen per combination of data.
1) I can't let it fail because then the whole test suite would fail.
2) If we comment them out, we miss out on running that test for all the other datasets.
3) I could add a flag in the specific dataset that fails, and have the test not run if that flag is set, but then I'm passing extra flags all over the place in my test subroutines.
What's the cleanest and easiest way to do this?
Or are clean and easy mutually exclusive?
That's what TODO is for.
With a todo block, the tests inside are expected to fail. Test::More will run the tests normally, but print out special flags indicating they are "todo". Test::Harness will interpret failures as being ok. Should anything succeed, it will report it as an unexpected success. You then know the thing you had todo is done and can remove the TODO flag.
The nice part about todo tests, as opposed to simply commenting out a block of tests, is it's like having a programmatic todo list. You know how much work is left to be done, you're aware of what bugs there are, and you'll know immediately when they're fixed.
Once a todo test starts succeeding, simply move it outside the block. When the block is empty, delete it.
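For example, a sketch using Test::More's TODO mechanism (the test functions and bug number are made up):

use Test::More tests => 3;

ok( login_works(), 'login works' );

TODO: {
    local $TODO = 'bug #1234: fails for this rare data combination';

    # These run normally, but Test::Harness reports their failures as
    # "todo" and flags any passes as unexpected successes.
    ok( report_totals_match($rare_dataset), 'totals match for rare dataset' );
    ok( export_succeeds($rare_dataset),     'export succeeds for rare dataset' );
}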
I see two major options:
disable the test (comment it out) with a reference to your bug-tracking system (i.e. a bug id), possibly keeping a note in the bug as well that there is a test ready for it;
move the failing tests into a separate test suite. You could even reverse the failing assertion, so that while the suite is green the bug is still there, and if it turns red either the bug is gone or something else is fishy (see the sketch below). Of course, a link to the bug-tracking system and bug id is still a good thing to have.
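A sketch of that reversal (function and bug number invented for illustration):

use Test::More tests => 1;

# In the "known bugs" suite, assert the *buggy* behaviour: the suite stays
# green while bug #1234 exists and turns red as soon as the behaviour changes.
isnt( report_total($rare_dataset), $expected_total,
      'bug #1234 still present (retire this test once fixed)' );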
If you actually use Test::More in conjunction with WWW::Mechanize, case closed (see comment from #daxim). If not, think of a similar approach:
# In your testing module
our $TODO;

# ...

if (defined $TODO) {
    # only print warnings
}

# in a test script
local $My::Test::TODO = "This bug is delayed until iteration 42";
I run into this regularly and am just looking for the best practice/approach. I have an app containing a database and datamodules, and I want to fire up the database/datasets on startup without having "active at runtime" set to true at design time (the database location varies). I also want to run a web "check for updates" routine when the app starts up.
Given TForm event sequences, and results from various trial and error, I'm currently using this approach:
I use a "Globals" record set up in the main form to store all global vars. It has one element called Globals.AppInitialized (a Boolean), which I set to False in the initialization section of the main form's unit.
In the main form's OnShow event (all forms are created by then), I test Globals.AppInitialized; if it's False, I run my initialization stuff and finish by setting Globals.AppInitialized := True.
This seems to work pretty well, but is it the best approach? Looking for insight from others' experience, ideas and opinions. TIA..
I generally turn off auto-creation of all forms EXCEPT the main form and possibly the primary datamodule.
One trick I learned: add your datamodule to your project, allow it to auto-create, and have it created BEFORE your main form. Then, when your main form is created, the datamodule's OnCreate will already have run (see the sketch below).
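A sketch of the .dpr creation order this describes (type names illustrative):

begin
  Application.Initialize;
  Application.CreateForm(TdmMain, dmMain);   // datamodule created first
  Application.CreateForm(TfrmMain, frmMain); // main form created second
  Application.Run;
end.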
If your application has code to, say, set the focus of a control (something you can't do on creation, since it's "not visible yet"), then define a user message and post it to the form in your OnCreate. The message SHOULD (no guarantee) be processed as soon as the form's message loop starts processing. For example:
const
  wm_AppStarted = wm_User + 101;

type
  Form1 = class(tForm)
    :
    procedure wmAppStarted(var Msg: tMessage); message wm_AppStarted;
  end;

// In your OnCreate event add the following, which should result in your
// wmAppStarted handler firing:
PostMessage(Handle, wm_AppStarted, 0, 0);
I can't think of a single time this message wasn't processed, but by the nature of the call it is added to the message queue, and if the queue is full it is "dropped". Just be aware that that edge case exists.
You may want to work directly in the project source (.dpr file), after the form creation calls and before Application.Run (or even earlier, if needed).
This is how I usually handle such initialization stuff:
...
Application.CreateForm(TMainForm, MainForm);
...
MainForm.ApplicationLoaded; // loads options, etc.
Application.Run;
...
I don't know if this is helpful, but some of my applications don't have any form auto-created, i.e. they have no mainform in the IDE.
The first form created with the Application object as its owner automatically becomes the mainform. Thus I autocreate only one datamodule as a loader and let it decide which datamodules to create when, and which forms to create in what order. This datamodule has StartUp and ShutDown methods, which are called as "brackets" around Application.Run in the dpr (see the sketch below). The ShutDown method gives a little more control over the shutdown process.
This can be useful when you have designed different "mainforms" for different use cases of your application or you can use some configuration files to select different mainforms.
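A minimal sketch of that .dpr arrangement, with hypothetical names (TDmLoader, StartUp, ShutDown):

begin
  Application.Initialize;
  Application.CreateForm(TDmLoader, DmLoader); // the only autocreated module
  DmLoader.StartUp;  // creates the needed datamodules and forms, in order;
                     // the first form owned by Application becomes the mainform
  Application.Run;
  DmLoader.ShutDown; // more control over the shutdown process
end.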
There actually isn't such a concept as a "global variable" in Delphi. All variables are scoped to the unit they are in and other units that use that unit.
Just make the AppInitialized flag and the initialization stuff part of your data module. Basically have one class (or datamodule) to rule all your non-UI stuff (kind of like the One Ring, except not all evil and such).
Alternatively you can:
Call it from your splash screen.
Do it during login.
Run the "check for updates" in a background thread; don't force the user to update right now. Do it kind of like Firefox does (see the sketch below).
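A sketch of that background check using anonymous threads (available in newer Delphi versions; CheckForUpdates and ShowUpdateNotification are hypothetical routines):

TThread.CreateAnonymousThread(
  procedure
  begin
    if CheckForUpdates then          // runs off the main thread
      TThread.Queue(nil,
        procedure
        begin
          frmMain.ShowUpdateNotification; // UI work back on the main thread
        end);
  end).Start;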
I'm not sure I understand why you need the global variables. Nowadays I write ALL my Delphi apps without a single global variable. Even when I did use them, I never had more than a couple per application.
So maybe you need to first think why you actually need them.
I use a primary data module to check whether the DB connection is OK; if it isn't, it shows a custom form to set up the DB connection, and then it loads the main form:
Application.CreateForm(TDmMain, DmMain);
if DmMain.isDBConnected then
begin
  Application.CreateForm(TDmVisualUtils, DmVisualUtils);
  Application.CreateForm(TfrmMain, frmMain);
end;
Application.Run;
One trick I use is to place a TTimer on the main form, set the interval to something like 300 ms, and perform any initialization (DB login, network file copies, etc.) in its OnTimer event. Starting the application brings up the main form immediately and lets the initialization 'stuff' happen after that. Users don't start multiple instances thinking "Oh, I didn't double-click... I'll do it again."
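A minimal sketch of that trick (assuming a TTimer named tmrStartup on the main form with Interval = 300 and Enabled = True; the initialization routines are hypothetical):

procedure TfrmMain.tmrStartupTimer(Sender: TObject);
begin
  tmrStartup.Enabled := False; // fire once, after the form is visible
  ConnectDatabase;             // hypothetical: open the DB at its runtime location
  CheckForUpdates;             // hypothetical: web "check for updates" routine
end;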