I have a longish test with several sections of this form:
let value = computationThatLogs()
XCTAssertEqual(value.a, 77)
I observe that the file:line error messages from failed XCTAssert calls frequently get mixed into the debug output of the computation in the next section, often even on the same line as a debug message!
Why is that? How can I prevent it from happening? Calling fflush(stderr); fflush(stdout) before every section does not seem to be enough.
In case that is relevant, the computations in each section are actually asynchronous; the test (i.e. the main thread) waits for them to complete using a DispatchSemaphore.
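Concretely, with the asynchronous part spelled out, each section looks roughly like this; the result type, the completion-handler signature, and the hard-coded values are simplified stand-ins for my real code:

import XCTest
import Foundation

// Hypothetical stand-ins for the real computation and its result type.
struct MyResult { let a: Int }

func computationThatLogs(completion: @escaping (MyResult) -> Void) {
    DispatchQueue.global().async {
        print("debug: computing...")      // debug output goes to stdout
        completion(MyResult(a: 42))
    }
}

final class LoggingTests: XCTestCase {
    func testSections() {
        // One "section": start the async computation, block the test thread
        // until it signals, then assert on the result.
        let semaphore = DispatchSemaphore(value: 0)
        var value: MyResult?

        computationThatLogs { result in
            value = result
            semaphore.signal()
        }
        semaphore.wait()

        XCTAssertEqual(value?.a, 77)      // the failure text goes to stderr

        // The next section's computation starts here; its stdout output is
        // what the stderr failure message ends up interleaved with.
    }
}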
I observe this in AppCode; I cannot reproduce it with Xcode.
There are bugs in AppCode related to this:
OCUnit: synchronize stderr/stdout
XCTest parser incorrectly handles stderr output after assertions when continueAfterFailure is true
When using VS Code to debug a Flutter app that makes calls to an API, every call auto-breaks, which is incredibly annoying. By auto-breaking, I mean that the debugger halts on lines that do not have breakpoints set on them, in code that is not ours - in this case framework code (browser_client.dart).
[Screenshot: Success Response]
[Screenshot: Error Response]
There is a weird yellow prompt that appears, which I guess marks the current line, but its icons pop up all over the place at random locations in the code for no discernible reason other than to be very, very annoying. If the code errors out, I would expect the flow to fall through to the try..catch handler wrapping all these calls, which it does, but not before halting here.
If the call succeeds and happens to return a list of 100 items, it will halt 100 times, forcing us to F5 through the loop as the list is read. By this I mean it halts 100 times on the line shown in the success response screenshot above, not in the loop of our application code. It is also only making one call, so it's not a case of making 100 calls to the API and halting each time.
"Debug only My Own Code" is turned on, and breakpoints are set only within our own code.
What setting is causing this halting issue and how do we turn it off?
I'm using AsyncMachine to model multiple objects interacting asynchronously, and in principle everything works as expected; really a cool extension ;-)
However, when I use logging, the millisecond delays reported in the log entries between the processing of multiple asynchronous events are higher than I would expect, so I'm wondering whether this is due to logging output being created by a blocking call to e.g. logger.info(). In other words, the timings I'm trying to obtain from log entry timestamps could be distorted by the creation of exactly those log entries.
Using https://pypi.org/project/aiologger/ seems like a reasonable way forward, given that aiologger implements non-blocking logging specifically for asyncio.
After a quick look at the pytransitions source code, however, I'm wondering what would happen if pytransitions itself still used the logging module from the standard library while my code used the logger offered by aiologger. My guess is that in this case only the logs created by my code would be non-blocking, so how could I make pytransitions use aiologger as well?
Thanks in advance for any help,
Eradian
so how could I make pytransitions use aiologger as well?
As of September 2022 there is no way of making transitions work with aiologger. For backward-compatibility reasons, transitions uses the %-formatting style for its debug messages:
logging.debug("Object %s is initialized with value %d", obj.name, obj.value)
Theoretically you could override the internal logger instance used by AsyncMachine:
from aiologger import Logger
import transitions.extensions.asyncio as taio
taio._LOGGER = Logger.with_default_handlers(name='transitions.async')
# ...
m = taio.AsyncMachine(states=['A', 'B', 'C'], initial='A')
But this will throw errors during the first logging attempt because of the aforementioned reason:
Invalid LogRecord args type: <class 'str'>. Expected Mapping
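Not part of the answer above, but if the underlying goal is simply non-blocking log output, one standard-library alternative that keeps transitions' %-style calls intact is to route all stdlib logging through logging.handlers.QueueHandler/QueueListener: the coroutine only enqueues the record, and a background thread does the actual I/O. A minimal sketch:

import logging
import queue
from logging.handlers import QueueHandler, QueueListener

# A background thread owns the (potentially slow) stream handler.
log_queue = queue.Queue(-1)
stream_handler = logging.StreamHandler()
stream_handler.setFormatter(
    logging.Formatter("%(asctime)s %(name)s %(levelname)s %(message)s")
)
listener = QueueListener(log_queue, stream_handler)
listener.start()

# Every stdlib logger, including transitions', now only does a fast put().
root = logging.getLogger()
root.addHandler(QueueHandler(log_queue))
root.setLevel(logging.DEBUG)

# ... run the AsyncMachine-based code here ...

listener.stop()  # flush any queued records at shutdown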
Hi all,
I am facing a strange error message while debugging code for functional coverage, specifically transition coverage. There are two level pins, for fifo1 and fifo2 respectively. While doing coverage for the first level pin, i.e. level1, the code is parsed successfully, but for the level2 pin it throws an error which says:
*** Error: Syntax error (probably an infinite recursion in macro expansion)
Before loading your code, do a 'trace macro'. This will show which macros are being expanded. Look in your docs for more details.
Also, unless you're just writing some simple prototyping code, 'tick notation' for accessing signals is VERY slow. It's the old method; Cadence's recommendation is to use ports instead of 'tick access'. We sped up our test runs by a factor of roughly 3-10x (I can't remember precisely) by using ports instead of ticks when we made the switch, back in version 6.01 of Specman.
I have a complex set of integration tests that uses Perl's WWW::Mechanize to drive a web app and check the results based on specific combinations of data. There are over 20 subroutines that make up the logic of the tests, loop through data, etc. Each test runs several of the test subroutines on a different dataset.
The web app is not perfect, so sometimes bugs cause the tests to fail with very specific combinations of data. But these combinations are rare enough that our team will not bother to fix the bugs for a long time; building many other new features takes priority.
So what should I do with the failing tests? It's just a few tests out of several dozen per combination of data.
1) I can't let it fail because then the whole test suite would fail.
2) If we comment them out, we miss out on running that test against all the other datasets.
3) I could add a flag in the specific dataset that fails, and have the test not run if that flag is set, but then I'm passing extra flags all over the place in my test subroutines.
What's the cleanest and easiest way to do this?
Or are clean and easy mutually exclusive?
That's what TODO is for.
With a todo block, the tests inside are expected to fail. Test::More will run the tests normally, but print out special flags indicating they are "todo". Test::Harness will interpret failures as being ok. Should anything succeed, it will report it as an unexpected success. You then know the thing you had to do is done and can remove the TODO flag.
The nice part about todo tests, as opposed to simply commenting out a block of tests, is it's like having a programmatic todo list. You know how much work is left to be done, you're aware of what bugs there are, and you'll know immediately when they're fixed.
Once a todo test starts succeeding, simply move it outside the block. When the block is empty, delete it.
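For reference, a minimal sketch of such a block with Test::More; the helper subroutine, dataset name, expected value, and bug number are made up for illustration:

use strict;
use warnings;
use Test::More;

our $TODO;    # Test::Builder looks $TODO up in the test script's package

# Hypothetical stand-in for one of the real test subroutines.
sub run_report_for_dataset { return ( total => 76 ) }

my %result = run_report_for_dataset('edge-case-17');

TODO: {
    local $TODO = 'web app miscomputes totals for this data combination (bug #1234)';

    # Runs normally; a failure is reported as an expected "todo" failure,
    # and an unexpected pass is flagged so the TODO can be removed.
    is( $result{total}, 77, 'total is correct for the edge-case dataset' );
}

done_testing();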
I see two major options:
1) Disable the test (comment it out), with a reference to your bug-tracking system (i.e. a bug id), possibly also keeping a note in the bug that there is a test ready for it.
2) Move the failing tests into a separate test suite. You could even reverse the failing assertion, so that while that suite is green the bug is still there, and if it turns red, either the bug is gone or something else is fishy. Of course, a link to the bug-tracking system and the bug id is still a good thing to have.
If you actually use Test::More in conjunction with WWW::Mechanize, case closed (see comment from #daxim). If not, think of a similar approach:
# In your testing module
our $TODO;
# ...
if (defined $TODO) {
    # only print warnings
}
# in a test script
local $My::Test::TODO = "This bug is delayed until iteration 42";
I have code in Visual Studio 2008, in C++, that works with files just using fopen and fclose.
Everything works perfectly in Debug mode, and I have tested it with several datasets.
But it doesn't work in Release mode; it crashes all the time.
I have turned off all the optimizations, there is no dependency on anything (in the linker), and I have also set these:
Optimization: Disabled (/Od)
Keep Unreferenced Data.
Do Not Remove Redundant
Optimize for Windows98: NO
I still keep wondering how it can fail to work under these circumstances.
What else should I turn off to make it work as it does in Debug mode?
I would understand it if the code worked in Release mode but not in Debug mode; that might be a coding fault. But the other way around seems weird, doesn't it?
I appreciate any help.
--Nima
Debug builds often initialize heap allocations to known fill patterns. The program might be depending on this behavior. Look for variables and buffers that are not getting initialized.
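A contrived example of the kind of thing to look for; the Config struct and its members are made up:

#include <cstdio>

struct Config
{
    int  retries;   // BUG: never initialized
    bool verbose;   // BUG: never initialized
};

int main()
{
    Config* cfg = new Config;   // "new Config" does NOT zero the members

    // A debug build's CRT heap fills the allocation with a fixed pattern,
    // so the garbage values are at least consistent from run to run; in a
    // release build they are whatever happened to be in memory.
    if (cfg->verbose)                        // reads uninitialized memory
        printf("retries = %d\n", cfg->retries);

    delete cfg;

    // Fix: write "new Config()" to value-initialize the members, or give
    // Config a constructor that sets sensible defaults.
    return 0;
}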
1) Double check any and all code that depends on preprocessor macros.
2) Use assert() to verify program-state preconditions. These checks must not be expected to affect program flow (i.e. removing the check would still let the code produce the same end result), because assert is a macro and is compiled out of release builds; see the sketch after this list. Use regular run-time conditionals when an assert won't do.
3) Indeed, never leave a variable in an uninitialized state.
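A sketch of the assert() pitfall from point 2; readRecord and the file name are made up, and the point is that the entire expression disappears when NDEBUG is defined in release builds:

#include <cassert>
#include <cstdio>

// Hypothetical helper with a side effect: it reads one record and advances
// the file position.
static size_t readRecord(FILE* f, char* buf, size_t len)
{
    return fread(buf, 1, len, f);
}

int main()
{
    char buf[64];
    FILE* f = fopen("data.bin", "rb");   // made-up file name
    if (!f)
        return 1;

    // BUG: in release builds NDEBUG is normally defined, so this whole
    // expression (including the read inside it) is compiled away and
    // buf is never filled.
    assert(readRecord(f, buf, sizeof buf) > 0);

    // Correct: do the work unconditionally, then assert on the result.
    size_t bytes = readRecord(f, buf, sizeof buf);
    assert(bytes > 0);
    (void)bytes;   // avoid an "unused variable" warning when NDEBUG is set

    fclose(f);
    return 0;
}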
By far the most likely explanation is differing undefined behavior in the two modes caused by uninitialized memory. Lack of thread safety and problems with synchronization code can also exhibit this kind of behavior because of differing timing environments between debug and release, but if your program isn't multi-threaded then obviously this can't be it.
I experienced this too, and in my case it was because of an array of structs that was supposed to have only X entries, but the loop that checked this array was reading up to index X+1. Interestingly, Debug mode ran fine regardless, though I was on Visual C++ 2005.
I spent a few hours putting printf calls into my code, line by line, to catch the bug. If anyone has a good way to debug this kind of error, please let me know.
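For what it's worth, the described bug boils down to something like the following (array name and size are made up); on newer toolchains, AddressSanitizer flags this kind of out-of-bounds read immediately, which beats adding printf calls line by line:

#include <cstdio>

struct Item { int value; };

int main()
{
    const int N = 4;            // "X" in the description above
    Item items[N] = {};

    // BUG: "<=" reads one element past the end (indices 0..N). Whether this
    // crashes, corrupts a neighbouring variable, or appears to work depends
    // on the stack layout, which differs between debug and release builds.
    for (int i = 0; i <= N; ++i)
        printf("items[%d].value = %d\n", i, items[i].value);

    // Correct bound: indices 0..N-1 only.
    for (int i = 0; i < N; ++i)
        printf("items[%d].value = %d\n", i, items[i].value);

    return 0;
}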