VS Code doesn't print output properly - Scala

I observed some kind of Heisenbug, I think: in VS Code, I run the following program:
object Main {
  def main(args: Array[String]): Unit = {
    print("Start...")
    print("...end.")
    // println()
    // System.out.close()
  }
}
If I run the program several times, say ten times, in the debug console tab about two times I get what I'd expect, but eight times I get no output at all.
If I uncomment either of the last two lines, I get the correct output every time.
And if I replace print("...end.") with print("...end.\nBye."), the text Start......end. is printed every time, but Bye. is only shown sporadically.
Do you know why that happens?
With IntelliJ IDEA I do not observe this behavior: there, I get the "correct" output every time.
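The usual explanation for symptoms like this is buffering: when stdout is attached to a pipe (as in an IDE's debug console) it may be fully buffered, and the process can exit before the buffer is flushed. A minimal sketch of the workaround is to flush explicitly before exit; the `run` helper taking a `PrintStream` below is my own addition for illustration, not part of the original program:

```scala
import java.io.{ByteArrayOutputStream, PrintStream}

object Main {
  // Writing through an explicit PrintStream makes the flush testable;
  // in the original program this would just be System.out.
  def run(out: PrintStream): Unit = {
    out.print("Start...")
    out.print("...end.")
    // print() does not flush; without this, buffered text can be lost
    // if the process exits while the console is still draining output.
    out.flush()
  }

  def main(args: Array[String]): Unit = run(System.out)
}
```

This would also explain why the commented-out lines "fix" it: println() ends with a newline (line-buffered streams flush on newline), and System.out.close() flushes as part of closing.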

How to capture stdout & stderr in Swift XCTest unit tests?

I’m writing a command-line tool in Swift. In my unit tests, I want to test the code’s output to stdout (and, secondarily, to stderr).
The examples I’ve found of using a Pipe() to copy or capture stdout output mostly rely on spawning a sub-process and assigning its stdout—and I don’t know where to begin figuring out how to deal with that in XCTest. But I have managed to pull together something that sort of begins to work without spawning a separate process.
What I’ve come up with so far compiles and runs, and with an ugly hack does sort of what I need, but just for one use: if I try to use it across multiple tests in the same run, all but the first fail. The hack is that, after the calls that write to stdout, I have to add a call that writes to stderr so that the data already sent to stdout actually gets written to my capturedStdout variable.
It seems there ought to be a more appropriate way to get this to work.
Here’s what I’ve got so far:
import XCTest

final class StdoutCaptureTests: XCTestCase {
    func testCapturePrint() {
        let pipe = Pipe()
        var capturedStdout = ""
        // Setup to copy data going to stdout into capturedStdout.
        setvbuf(stdout, nil, _IONBF, 0)
        // Assign stdout’s pointer to the pipe:
        dup2(pipe.fileHandleForWriting.fileDescriptor, STDOUT_FILENO)
        // A closure for receiving the data sent to stdout:
        pipe.fileHandleForReading.readabilityHandler = { handle in
            if let str = String(data: handle.availableData, encoding: .utf8) {
                capturedStdout += str
            }
        }
        // The subject of the test outputs to stdout:
        // (In this simplified example, just print "test" to stdout.)
        print("test")
        // Here’s where it gets weird...
        // If none of the following lines are uncommented,
        // the XCTAssertEqual test below will fail
        // (nothing gets written to capturedStdout).
        // Uncommenting the following line makes the XCTAssertEqual work:
        //XCTAssert(false) // ⚠️
        // (but that means the test is considered failed.)
        // And this has no effect:
        //XCTAssert(true) // 🛑
        // fflush has no effect (when it gets nil as argument here,
        // the man(ual) page says it “flushes all open streams.”)
        //fflush(nil) // 🛑
        // But this works and lets the test pass:
        //FileHandle.standardError.write("".data(using: .utf8)!) // ✅
        XCTAssertEqual(capturedStdout, "test\n")
        // When the above commented lines are still commented:
        // 🛑 XCTAssertEqual failed: ("") is not equal to ("test")
    }
}
Given that it fails when used in subsequent test methods in the same run, I’m thinking there’s probably some cleanup I should be doing at the end of each test to restore stdout before the next test method runs.
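One common pattern (a sketch, not from the question; the helper name is mine) is to save the original stdout descriptor with dup before redirecting, then restore it and close the pipe’s write end afterwards. Reading synchronously to EOF then avoids the readabilityHandler timing problem entirely, and restoring the descriptor is exactly the per-test cleanup that lets later tests work:

```swift
import Foundation

// Sketch: capture stdout synchronously by saving and restoring the descriptor.
func captureStdout(_ body: () -> Void) -> String {
    let savedStdout = dup(STDOUT_FILENO)   // remember the real stdout
    let pipe = Pipe()
    dup2(pipe.fileHandleForWriting.fileDescriptor, STDOUT_FILENO)

    body()
    fflush(stdout)                         // push buffered C stdio data

    // Restore stdout so later tests (and the test runner) print normally.
    dup2(savedStdout, STDOUT_FILENO)
    close(savedStdout)
    pipe.fileHandleForWriting.closeFile()  // reader now sees EOF

    let data = pipe.fileHandleForReading.readDataToEndOfFile()
    return String(data: data, encoding: .utf8) ?? ""
}

let captured = captureStdout { print("test") }
```

Note that this reads only after the body completes, so very large outputs could fill the pipe buffer and block; for typical test assertions that is not an issue.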
Some of the references I’ve found:
using pipe() in Swift App to redirect stdout into a textView (only runs in simulator, not native)
Redirect Process stdout to Apple System Log Facility in Swift
https://www.hackingwithswift.com/forums/ios/redirecting-output-to-a-log-file-when-not-attached-to-debugger/5766

cannot show the trace message in Output window when running unit test

I tried to use the output file in the property settings, but it does not work. When I run a unit test method, the trace log messages do not show in the Output window, though they do show in debug mode.
//[TestCategory("XXXNonCritical"), TestMethod]
//public void XXXContainer()
//{
//    // test code
//    Assert.IsTrue(metaSettingsData.Any());
//}
I expected to see the trace log messages in the Output window.
There is an Output link in the unit test result where you can see the log.

"Neither element nor any descendant has keyboard focus" when running XCTestCase in a real iPhone

I'm trying to run a UI test case where there are two input fields. Following is my code:
let usernameTextField = app.webViews.otherElements["Identity Server"].textFields["Username"]
let passwordField = app.webViews.otherElements["Identity Server"].secureTextFields["Password"]
_ = usernameTextField.waitForExistence(timeout: 8)
usernameTextField.tap()
usernameTextField.typeText("TEST") // Breakpoint line
app.typeText("\n")
passwordField.tap()
passwordField.typeText("test")
When I run the test case normally it fails with the error given in the question title. But if I add a breakpoint at the commented line, it runs without any error.
I tried to use the following code snippets after the breakpoint line, separately:
sleep(8)
_ = passwordField.waitForExistence(timeout: 8)
But none of those work. For further information, this is an auth process scenario where those input fields reside in a web view.
I decided to answer myself rather than closing the question. I'll explain what went wrong in my code. The main mistake I made was leaving continueAfterFailure set to true. Because of that, the error was shown at the wrong line, not at the line that actually threw it.
So the solution is,
continueAfterFailure = false
and
usernameTextField.tap()
sleep(2)
usernameTextField.typeText("TEST")
There should be a small wait until the keyboard appears in the web view before typing text.
Send the \n at the end of the string you send to the username text field:
usernameTextField.typeText("TEST\n")

CLIPS (clear) command fails / throws exception in pyclips

I have a pyclips/CLIPS program for which I wrote some unit tests using pytest.
Each test case involves an initial clips.Clear() followed by the execution of real CLIPS COOL code via clips.Load("rule_file.clp"). Running each test individually works fine.
Yet, when telling pytest to run all tests, some fail with ClipsError: S03: environment could not be cleared. In fact, it depends on the order of the tests in the .py file. There seem to be test cases that cause the subsequent test case to throw the exception.
Maybe some CLIPS code is still "in use", so that the clearing fails?
I read here that (clear)
Clears CLIPS. Removes all constructs and all associated data structures (such as facts and instances) from the CLIPS environment. A clear may be performed safely at any time, however, certain constructs will not allow themselves to be deleted while they are in use.
Could this be the case here? What is causing the (clear) command to fail?
EDIT:
I was able to narrow down the problem. It occurs under the following circumstances:
test_case_A comes right before test_case_B.
In test_case_A there is a test such as
(test (eq (type ?f_bio_puts) clips_FUNCTION))
but f_bio_puts has been set to
(slot f_bio_puts (default [nil]))
So testing the type of a slot variable that was initialized to [nil] seems to cause the (clear) command to fail. Any ideas?
EDIT 2
I think I know what is causing the problem. It is the test line. I adapted my code to make it run in the CLIPS Dialog Window, and I got this error when loading via (batch ...):
[INSFUN2] No such instance nil in function type.
[DRIVE1] This error occurred in the join network
Problem resided in associated join
Of pattern #1 in rule part_1
I guess it is a bug in pyclips that this is masked.
Change the EnvClear function in the CLIPS source file construct.c, adding the following lines of code to reset the error flags:
globle void EnvClear(
  void *theEnv)
  {
   struct callFunctionItem *theFunction;

   /*==============================*/
   /* Clear error flags if issued  */
   /* from an embedded controller. */
   /*==============================*/

   if ((EvaluationData(theEnv)->CurrentEvaluationDepth == 0) &&
       (! CommandLineData(theEnv)->EvaluatingTopLevelCommand) &&
       (EvaluationData(theEnv)->CurrentExpression == NULL))
     {
      SetEvaluationError(theEnv,FALSE);
      SetHaltExecution(theEnv,FALSE);
     }

   /* ... rest of the original EnvClear body unchanged ... */
  }

Await insert to finish before asserting

I have a model and wrote tests before I started. Now my problem is: while the functionality works, my tests are non-deterministic. Most of the time they pass, but sometimes they don't. I assume it's because of the Future.
But let's show you what I mean by example:
before {
  db.run(animals.createTable)
}

after {
  db.run(animals.dropTable)
}

"An animal" must "have a unique id" in {
  val setup = DBIO.seq(
    animals.insert(Animal("Ape")),
    animals.insert(Animal("Dog"))
  )
  db.run(setup)

  val result = db.run(animals.tableQuery.result).futureValue
  result shouldBe a[Seq[_]]
  result.distinct.length shouldEqual 2
  result(0).id should not equal result(1).id
}
I assume sometimes the db.run(setup) finishes in time, but sometimes it does not, hence I then get an AssertionException: "expected length was 2, actual 0". As said, to me it looks like a "race condition" here (I know that is not the correct term ;)).
So, what I tried was simply awaiting the result of the insert-statement like so:
Await.ready(db.run(setup), Duration.Inf)
But that doesn't change a thing. So why is that? Can somebody explain why Await does not block here? I assumed it would block and only execute the following lines once the insert had completed.
I also tried wrapping the assertions in an .onComplete-block, but no luck either.
Any hints for me?
I suspect your issue is that sometimes your before hook has not finished either, since it's also async. If you add an Await.ready to the future in your before block, along with your setup block, the problem should go away.
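To make the idea concrete, here is a self-contained sketch with plain Futures standing in for Slick's db.run calls (the object, the in-memory "table", and the helper names are mine, purely for illustration). Awaiting every async step, including the setup that the before hook would do, guarantees the assertions only run once the inserts are done:

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.duration._
import scala.concurrent.ExecutionContext.Implicits.global

object AwaitSketch {
  // Hypothetical stand-ins for the async db.run(...) calls.
  var table: Vector[String] = Vector.empty
  def createTable(): Future[Unit] = Future { table = Vector.empty }
  def insert(row: String): Future[Unit] = Future { table = table :+ row }

  def runTest(): Int = {
    // Await each async step (including the "before" setup) so the
    // assertion below never races against an unfinished insert.
    Await.result(createTable(), 5.seconds)
    Await.result(insert("Ape"), 5.seconds)
    Await.result(insert("Dog"), 5.seconds)
    table.distinct.length
  }
}
```

With every step awaited, runTest() deterministically sees both rows; dropping the Await calls reintroduces exactly the intermittent "expected length was 2, actual 0" failure described above.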