VSTS Test fails but vstest.console passes; the assert executes before the code for some reason? - azure-devops

Well, the system we have has a bunch of dependencies, but I'll try to summarize what's going on without divulging too many details.
The tests being executed live in a test assembly (a .dll). A lot of these tests call an API.
In the problematic method, there are two API calls with an await on them: one to write a record to an external interface, and another to extract all records from that interface and read the last one. The test simply checks whether writing the last record was successful in an end-to-end context; that's why there's both a write and then a read.
If we execute the test in Visual Studio, everything works as expected. I also tested it manually by running vstest.console.exe from the command line, and the expected results always come out as well.
However, when it comes to the VS Test task in VSTS, it fails for some reason. We've been trying to figure it out, and eventually we reached the point where we printed the list from the 'read' part. It turns out the last record we inserted isn't in the data we pulled, yet when we check the external interface via a different method, we can confirm that the write actually happened. What gives? Why is VSTest getting what looks like an outdated set of records?
We also noticed two things:
1.) For the tests that passed, none of the Console.WriteLine outputs appear in the logs; they only show up for failed tests.
2.) Even though our Data.Should.Be call is at the very end of the TestMethod, the logs report the failure BEFORE the lines are printed! And even then, the printing should happen after reading the list of records, yet when the prints do appear we're still missing the record we just wrote.
Is there some bottom-to-top thing we're missing here? It really seems to me like the VSTS vstest is executing the assert before the actual code. The TestMethods do run in the right order, though (the 4th test written top-to-bottom in the code is executed 4th rather than 4th-from-last), and we need them to run in the right order because some of the later tests depend on the earlier ones succeeding.
Is there anything we're missing here? I'd post the source code, but there are a bunch of things I'd need to scrub first.

It turns out we were sorely misunderstanding what 'await' does. We're using .Wait() instead on the culprit call, and we'll also go back through the other tests to check their quality.
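For anyone hitting the same thing, here is a minimal MSTest-style sketch of the blocking pattern the fix implies; the _api field, the BuildTestRecord helper, and the FluentAssertions-style assert are illustrative, not taken from the original test:

    [TestMethod]
    public void WriteLastRecord_EndToEnd()
    {
        // Illustrative only: assumes a test API client (_api), System.Linq, and FluentAssertions.
        var record = BuildTestRecord();                      // hypothetical helper

        // Blocking with .Wait()/.Result forces each call to finish before the next
        // line runs, so the assert can no longer observe a stale record list.
        _api.WriteRecordAsync(record).Wait();
        var records = _api.ReadAllRecordsAsync().Result;

        records.Last().Should().Be(record);
    }

The same effect can be had by making the test method async Task and awaiting both calls; the failure mode described in the question is typically what you get when the async work is started but never awaited, so the method returns (and the assert is evaluated) before the write completes.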


Function block not updating variable

Alright, so I'm learning CODESYS in school and I'm using function blocks. However, they didn't seem to update when updating local variables, so I made a test, the one you can see below.
As you can see, in the FB below, "GVL.sw1" becomes True, but "a" doesn't. Why does it not become True? I tested a friend's code and his worked just fine, but mine doesn't...
https://i.stack.imgur.com/IpPPZ.png
Comment from Reddit:
You are showing the source code for a program called "main". You have a task running called "Main_Task". The program and the task are not directly related.
Is "main" being called anywhere?
So I added main to the "main task" and it worked. I have no idea why it didn't work in the real assignment, but maybe I'll solve it now that I have gotten this far.
In your example you have 2 programs (PRG): main and PLC_PRG.
Creating a program doesn't mean that it will be executed. For that, you need to add the program to a Task in the Task Configuration. Each Task will, by default, be executed on every cycle according to the priority it is configured with (you could also have it execute on an event, etc., instead). When a Task is executed, each program added to that Task is executed in the order they are listed (you can reorder them at any time).
With that said, if you look at your Task Configuration, the MainTask only has the program PLC_PRG added, so only that program will run. The main program that you are inspecting is never even run.

Recursive Workflow in Powershell

I'm trying to automate a lengthy process that can be broken down into several steps. (say Steps 1-5)
I have written a script that separates these into functions and calls them sequentially.
However, we now have the additional requirement of making the script restartable. That is, if it fails in any one of the steps, rerunning the script would cause it to skip all completed steps and retry from the failed one.
Is this at all possible without referencing an external log file?
I've tried using workflows but it seems like recursion isn't supported.
Any ideas?
Some options, aside from using a log file:
Use the registry
You can set a registry value to a number depending on which step you stopped on. This removes the need for a log file, but it is a similar kind of 'external' storage (see the sketch after this list).
Check the task status on each run
Depending on the tasks, you could have the script 'test' step 3, for example, to see if it has already been completed, then check steps 4, 5, etc. until it encounters one it needs to run, and continue from there. This may be impossible, or may require a lot of overhead code for not much payoff.
Allow the user to continue from within the script
This is probably the best way of doing it (aside from just using a log file): run the script in blocks, and when an error is encountered, prompt the user to fix the issue and press 'enter' to re-run the previous script block. This also makes it easy to provide information about what failed.
The main thing here is that once a script 'quits', it needs an external source of information to know what happened on its last run, or it has to handle the problem in another way.
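To make the registry option above concrete, here is a minimal sketch; the registry path and the Step1..Step5 function names are placeholders for whatever your script actually defines:

    # Resume from the last completed step recorded in the registry.
    $regPath = 'HKCU:\Software\MyCompany\MyRestartableScript'   # placeholder key
    if (-not (Test-Path $regPath)) { New-Item -Path $regPath -Force | Out-Null }

    $saved = Get-ItemProperty -Path $regPath -Name LastCompletedStep -ErrorAction SilentlyContinue
    $done  = if ($saved) { $saved.LastCompletedStep } else { 0 }

    $steps = @(
        { Step1 }, { Step2 }, { Step3 }, { Step4 }, { Step5 }   # your existing functions
    )

    for ($i = $done; $i -lt $steps.Count; $i++) {
        & $steps[$i]    # let a failing step throw so the script stops here
        Set-ItemProperty -Path $regPath -Name LastCompletedStep -Value ($i + 1)
    }

    # All steps finished: clear the checkpoint so the next run starts from step 1.
    Remove-ItemProperty -Path $regPath -Name LastCompletedStep -ErrorAction SilentlyContinue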

pytest: are pytest_sessionstart() and pytest_sessionfinish() valid hooks?

Are pytest_sessionstart(session) and pytest_sessionfinish(session) valid hooks? They are not described in the dev hook docs or the latest hook docs.
What is the difference between them and pytest_configure(config)/pytest_unconfigure(config)?
In the docs it is said:
pytest_configure(config): called after command line options have been parsed and all plugins and initial conftest files have been loaded.
and
pytest_unconfigure(config): called before the test process is exited.
Session is the same, right?
Thanks!
The bad news is that the situation with sessionstart/configure is not very well specified. sessionstart in particular is not much documented, because the semantics differ depending on whether you are in the xdist/distributed case or not. One can distinguish these situations, but it's all a bit too complicated.
The good news is that pytest-2.3 should make things easier. If you define a @pytest.fixture with scope="session", you can implement a fixture that is called once per process within which tests execute.
For distributed testing, this means once per test slave. For single-process testing, it means once for the whole test run. In either case, if you do a "--collectonly" run, or "-h" or other options that do not involve the running of tests, then fixture functions will not execute at all.
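As an illustration, a minimal session-scoped fixture sketch for a conftest.py (the fixture name and the resource it manages are made up for the example):

    import pytest

    @pytest.fixture(scope="session", autouse=True)
    def session_resource(request):
        # Runs once per test process, right before the first test.
        resource = {"started": True}            # e.g. open a connection, start a server
        def finish():
            resource["started"] = False         # teardown runs once, at session end
        request.addfinalizer(finish)
        return resource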
Hope this clarifies.

I want SSIS to trap an error, branch and then exit with success

I have googled around and think I have tried just about every suggestion I've seen but I can't seem to get this to work the way I want to. I'm CONVINCED that SSIS can do this.
I have an SSIS package that has 4 tasks; three of them execute in order: task A, then B, then C. The 4th task, D, I am using as an end-point if B fails... more on that in a minute.
Task A does some unrelated work. I'm including it because there's processing at the start of the package, and should any of it fail, I want the package to fail - just the SSIS default: exit the package with a return code of 1 if task A fails.
B is a DFT with a flat-file source; it transfers data to be processed in step C. B is the crux of my question, because I want to deal with errors in task B differently than the rest. It's not uncommon for the flat-file source to not exist or to be corrupt, and I want to capture/trap that without causing the entire package to fail. If B does have an error, though, I do NOT want task C to run.
C processes the data I transferred in DFT B when B does not have an error.
I am trapping errors in DFT-B with an "OnError" event handler that executes a SQL task that sends an email. I then use an "on completion" precedence constraint to divert the control flow to an endpoint - task D. (D does nothing; it's just a dummy end-point that I'm using partly for debugging and partly to give the event handler somewhere to "go" - I am not certain D is even required.)
Anyway, when I run the package and trigger an error in B, the package runs the event handler (sends the email just fine) and even continues on to task "D"; in the debugger, "D" ends with success (shows green). The problem is, my package still fails with an exit code of "1".
I've tried messing with all sorts of things... ForceExecutionValue / ForceExecutionResult / MaximumErrorCount. Now I have a package that doesn't even continue to my "task D", and I'm not sure how I got there (but the event handler SQL is still faithfully sending me an email!).
I do not want that exit code of 1! I want a 0!
help! thanks!
Have you tried something like this:
To get the different conditions on the transition lines, just double-click on the line you wish to edit. A red line indicates the control flow taken if the previous task fails.
EDIT: I believe I misread the question! From what I got, you can already control the flow as intended; your issue is with the exit code. Have you tried setting the Propagate variable to False in the event handler scope?
Check the following link for more information on it:
http://simonworth.wordpress.com/2009/11/11/ssis-event-handler-variables-propagate/

How should I deal with failing tests for bugs that will not be fixed

I have a complex set of integration tests that uses Perl's WWW::Mechanize to drive a web app and check the results based on specific combinations of data. There are over 20 subroutines that make up the logic of the tests, loop through data, etc. Each test runs several of the test subroutines on a different dataset.
The web app is not perfect, so sometimes bugs cause the tests to fail with very specific combinations of data. But these combinations are rare enough that our team will not bother to fix the bugs for a long time; building many other new features takes priority.
So what should I do with the failing tests? It's just a few tests out of several dozen per combination of data.
1) I can't let it fail because then the whole test suite would fail.
2) If we comment them out, that means we miss out on running that test against all the other datasets.
3) I could add a flag in the specific dataset that fails, and have the test not run if that flag is set, but then I'm passing extra flags all over the place in my test subroutines.
What's the cleanest and easiest way to do this?
Or are clean and easy mutually exclusive?
That's what TODO is for.
With a todo block, the tests inside are expected to fail. Test::More will run the tests normally, but print out special flags indicating they are "todo". Test::Harness will interpret failures as being ok. Should anything succeed, it will report it as an unexpected success. You then know the thing you had todo is done and can remove the TODO flag.
The nice part about todo tests, as opposed to simply commenting out a block of tests, is it's like having a programmatic todo list. You know how much work is left to be done, you're aware of what bugs there are, and you'll know immediately when they're fixed.
Once a todo test starts succeeding, simply move it outside the block. When the block is empty, delete it.
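For example, a minimal TODO block might look like this (the bug number and assertions are made up for illustration):

    use Test::More;

    TODO: {
        local $TODO = "fails for this data combination, see bug #1234";

        # These assertions run normally, but the harness treats failures as ok.
        ok( $mech->success, 'page renders for the problematic dataset' );
        is( $row_count, 42, 'expected number of rows' );
    }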
I see two major options:
disable the test (comment it out), with a reference to your bug-tracking system (i.e. a bug id), possibly keeping a note in the bug as well that there is a test ready for it
move the failing tests into a separate test suite. You could even reverse the failing assertion, so that while the suite is green the bug is still there, and if it becomes red either the bug is gone or something else is fishy. Of course, a link to the bug-tracking system and the bug id is still a good thing to have.
If you actually use Test::More in conjunction with WWW::Mechanize, case closed (see the comment from daxim). If not, think of a similar approach:
    # In your testing module (assumed here to be My::Test)
    package My::Test;
    our $TODO;

    # ... wherever the module reports a failed check:
    if (defined $TODO) {
        # only print warnings instead of failing the test
    }

    # In a test script:
    local $My::Test::TODO = "This bug is delayed until iteration 42";