I am writing some performance profiling (benchmark) test code as suggested here: https://docs.flutter.dev/cookbook/testing/integration/profiling. However, the documentation suggests I should run it like:
flutter drive --driver=test_driver/perf_driver.dart --target=integration_test/scrolling_test.dart --profile
However, that command can take minutes per run, even if only one line of code has changed, because it goes through the full Android/iOS compilation. That makes debugging the correctness of my test code very slow.
On the other hand, Flutter has a powerful hot reload/restart, which takes only seconds to pick up code changes. Can we utilize that to debug benchmark test code more easily?
I would like to share my approach here:
You can run
flutter run integration_test/scrolling_test.dart --no-dds
and that is all: you can now hot restart for free. (In my case --no-dds is needed because the run errors out otherwise; you may not need it.)
But bear in mind some drawbacks:
To allow hot restart, the test must run in debug mode, but profiling/benchmarking should only be done in profile mode, otherwise the numbers are completely useless. So use this mode only to debug the correctness of your test code, and never trust the numbers it produces.
With this approach, no data is collected and written to the host machine after the tests finish, because there is no driver (e.g. test_driver/perf_driver.dart) running. But since the numbers from this mode should not be used anyway, that is not a problem.
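Once the test logic behaves correctly under hot restart, switch back to the documented profile-mode invocation from the top of this post to collect the real numbers:
flutter drive --driver=test_driver/perf_driver.dart --target=integration_test/scrolling_test.dart --profile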
I have built and started an OPC UA embedded OpenModelica server with the BouncingBall model like so:
$ omc +s path/to/model
$ ./BouncingBall -embeddedServer=opc-ua -rt=1
Now I'm trying to interact with it using an OPC UA client. However, I don't understand how I'm supposed to interact with the server properly. As far as I know, this is undocumented.
1. The most promising approach seems to be to set enableStopTime to false and run to true. The simulation then appears to run indefinitely, and the values make sense. However, I only seem to be able to extract the values in real time. While the simulation is running, if I set run to false, the server seems to enter an erroneous state and refuses to give any values back.
2. If I restart the executable and instead set step to true, nothing seems to change, and after trying to set step to true a second time the server becomes unresponsive. The -rt=1 option doesn't seem to matter; it appears to enter the same state as in (1).
3. (After a restart) If I leave enableStopTime as true and set run to true, the simulation runs to the stop time and then the server quits with the message "The simulation finished successfully." Maybe this is intended, but it seems odd; it would make sense to be able to restart the simulation or trigger it with new options.
What I would hope to be able to do: start and stop the simulation, as well as rewind to a certain point to check the values there. It seems to me that the API "affords" this functionality, and it could probably be provided by wrapping the executable and API in a hacky way. Are the behaviours above bugs or intended? What is the intended way to interact with an OPC UA server in these cases?
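For illustration, the kind of interaction I'm attempting looks roughly like the sketch below, written against the Eclipse Milo Java SDK. The endpoint port and the node ids for run and h are assumptions on my part; browse the server's namespace to find the actual identifiers.

import org.eclipse.milo.opcua.sdk.client.OpcUaClient;
import org.eclipse.milo.opcua.stack.core.types.builtin.DataValue;
import org.eclipse.milo.opcua.stack.core.types.builtin.NodeId;
import org.eclipse.milo.opcua.stack.core.types.builtin.Variant;
import org.eclipse.milo.opcua.stack.core.types.enumerated.TimestampsToReturn;

public class BouncingBallClient {
    public static void main(String[] args) throws Exception {
        // Assumed endpoint; check the port the embedded server prints on startup.
        OpcUaClient client = OpcUaClient.create("opc.tcp://localhost:4841");
        client.connect().get();

        // Hypothetical node ids for the simulation control flag and the
        // ball height; the real identifiers come from browsing the namespace.
        NodeId run = new NodeId(1, "run");
        NodeId height = new NodeId(1, "h");

        // Start the simulation by writing true to the run variable.
        client.writeValue(run, DataValue.valueOnly(new Variant(true))).get();

        // Poll the height while the simulation is running.
        for (int i = 0; i < 10; i++) {
            DataValue v = client.readValue(0.0, TimestampsToReturn.Both, height).get();
            System.out.println("h = " + v.getValue().getValue());
            Thread.sleep(500);
        }

        client.disconnect().get();
    }
}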
The OpenModelica compiler version is 1.16.0~1-g84b4a71
Please try the latest nightly build; it includes the following commit.
That might solve it. I believe things worked without subscriptions before, since I could never reproduce this without them.
(By the way, do people go through our git commit feed and try to reproduce bugs fixed in the last 24 hours? We quite often get questions about things that were fixed very recently.)
I have been trying to fuzz using both AFL and libFuzzer. One of the distinct differences I have come across is that when AFL is executed, it runs continuously until it is manually stopped by the developer.
On the other hand, libFuzzer stops the fuzzing process when a bug is identified. I know it allows parallel fuzzing through the -jobs=N flag; however, those processes still stop when a bug is identified.
Is there any reason behind this behavior?
Also, is there any flag that allows libFuzzer to run continuously until the developer stops the fuzzing process?
This question is old, but I also needed to run libFuzzer without it stopping.
It can be accomplished with the flag -fork=<number of jobs> combined with -ignore_crashes=1.
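For example, with a fuzz target binary called ./my_fuzzer and a corpus directory corpus/ (both names are placeholders), the invocation becomes:
./my_fuzzer -fork=4 -ignore_crashes=1 corpus/
In fork mode the parent process keeps spawning new child jobs; with -ignore_crashes=1, crashing inputs are saved as artifact files and fuzzing carries on instead of exiting.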
Be aware that Ctrl+C then no longer works: it is treated as a crash and just spawns a new job. I think this is a bug, though - see here.
I'm trying to figure out the current status of the GWTTestCase suite/methodology.
I've read some things which say that GWTTestCase is kind of obsolete. If this is true, then what would be the preferred methodology for client-side testing?
Also, although I haven't tried it myself, someone here says that he tried it, and it takes seconds or tens of seconds to run a single test; is this true? (i.e. is it common to take tens of seconds to run a test with GWTTestCase, or is it more likely a config error on our side, etc)
Do you use any other methodology for GWT client-side testing that has worked well for you?
The problem is that any GWT code has to be compiled to run within a browser. If your code is just Java, you can run it in a typical JUnit or TestNG test, and it will run as quickly as you expect.
But consider that a JUnit test must be compiled to .class, and run in the JVM from the test runner main() - though you don't normally invoke this directly, just start it from your build tool or IDE. In the same way, your GWT/Java code must be compiled into JavaScript, and then run in a browser of some kind.
That compilation is what takes time - for a minimal test, running in only one browser (i.e. one permutation), this is going to take a minimum of about 10 seconds on most machines (that includes compiling the host page for the GWTTestCase, which lets the JVM tell the browser which test to run and collect results, stack traces, or timeouts back). Then add in how long the tested components of your project take to compile, and you should have a good idea of how long that test case will take.
There are a few measures you can take to minimize the time taken, though 10 seconds is pretty much the bare minimum if you need to run in the browser.
Use test suites - these tell the compiler to go ahead and build a single, larger module in which to run all of the tests (see the sketch after this list). Downside: if you do anything clever with your modules, joining them into one might have other ramifications.
Use JVM tests - if you are just testing a presenter, and the presenter is pure Java (with a mock view), then don't bother running the code in the browser just to test its logic. If you are concerned about differences, consider whether the purpose of the test is to make sure the compiler works, or to exercise the logic.
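For the test-suite approach, GWT provides com.google.gwt.junit.tools.GWTTestSuite, which groups the added GWTTestCase classes by module so that each module is compiled only once. A minimal sketch, where the two test classes are hypothetical GWTTestCase subclasses from your own project:

import com.google.gwt.junit.tools.GWTTestSuite;
import junit.framework.Test;
import junit.framework.TestSuite;

public class AllWidgetTests {
    public static Test suite() {
        // GWTTestSuite sorts tests by GWT module, so the compiler builds
        // each module once rather than once per test class.
        TestSuite suite = new GWTTestSuite("All widget tests");
        suite.addTestSuite(WidgetOneTest.class); // hypothetical GWTTestCase subclass
        suite.addTestSuite(WidgetTwoTest.class); // hypothetical GWTTestCase subclass
        return suite;
    }
}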
To set the scene: I'm an experienced developer and have coded in many languages over the years, including a good bit of Perl back in the late '90s and early '00s. I haven't touched Perl since then, but I now have a client who wants some changes made to an existing open source project built using Perl 5 and Catalyst. I've quickly worked through the Catalyst tutorials, read a few books online, and am now starting to feel my way.
I have the existing project up and running on a clean Debian Wheezy VM and am testing the code and my changes using the Catalyst development server.
While working through the tutorials and writing a few test apps, the development server would always output a lot of useful information when run, such as the configured routes. But with this project, when I run the server, I don't get much output. I don't even get messages sent to $c->log->debug().
I run the server with the following command:
perl ./script/asnn_panel_server.pl -d -r
Which outputs the following:
HTTP::Server::PSGI: Accepting connections at http://0:3000/
I can access the server and the application is running fine.
In a test controller action I can try the following lines:
$c->log->debug("A test debug message");
print "A test print message\n";
The debug log message does not appear in my development server output, but the print line does. So I know the call to $c->log->debug() is not blowing up, because the next line executes - but where is the debug output going?
So essentially I feel I 'could' be getting more useful output from the Catalyst development server, but am not.
I have googled but can't find anything relevant. Sorry if I'm going in the wrong direction here - I do know what I'm doing in general, but I have a lot to pick up in a short amount of time!
I suspect my issues might be specific to the open source project I'm working on, but there's not a lot of help to be had from that direction. Could anyone give me any pointers as to what to investigate?
UPDATE: I now realise that the application is using Log4perl, which is configured to send $c->log->debug() output to syslog. I still don't know why the Catalyst development server isn't providing much output.
For anyone coming upon this later: if you want to see the developer debug stream (stuff about the routes, classes, and models your application is using, etc.), you need to be in debugging mode, which you can enable easily by setting CATALYST_DEBUG=1 in your environment (I often start my app like "CATALYST_DEBUG=1 perl -Ilib script/myapp_server.pl").
There is sadly a difference between debug as a log level and debugging mode. The way Catalyst works is that if you are in debugging mode (via CATALYST_DEBUG=1, or any of the other documented ways it gets turned on), the whole debugging stream gets sent to the log, most of it logged at the debug level (again, debug as a log level is distinct from the debugging developer mode). So you need both debugging mode enabled and a logger set to listen at the debug level.
If you use the default Catalyst log, it is at debug level by default, so setting CATALYST_DEBUG=1 is all you need. If you use a different logger, be sure to enable the debug log level in your development setup if you wish to see those developer-stream logs.
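If you want those messages on screen rather than in syslog, a minimal Log4perl configuration that routes debug-level output to stderr looks something like the sketch below (the pattern layout is just an example; adjust it to match the project's existing Log4perl config file):

log4perl.logger = DEBUG, Screen
log4perl.appender.Screen = Log::Log4perl::Appender::Screen
log4perl.appender.Screen.stderr = 1
log4perl.appender.Screen.layout = Log::Log4perl::PatternLayout
log4perl.appender.Screen.layout.ConversionPattern = [%p] %m%n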
Messages sent to $c->log->debug() are generally disabled in production environments. If it doesn't seem to matter whether you start your scripts with or without the -d switch, then I'd suggest something downstream in the sequence is setting the environment variable CATALYST_DEBUG to 0 or undef unilaterally.
That said, you should still be able to see the output of $c->log->info() or $c->log->warn() calls. Whether or not those appear should help you determine if the problem is Log4perl-related or Catalyst-related.
Hopefully that will get you on your way.
We often use test suites to ensure we start from a known, working situation.
In this case, we want the whole test suite to run to test all the pages. If some fail, we still want the other tests to run.
The test suite seems to stop the moment it finds an error in one of the tests.
Can it be set up to keep going and run all the tests, irrespective of results?
Check that you are using Verify and not Assert: a failed Assert will STOP your test run, while a failed Verify is recorded and the suite keeps going (see the sketch after this list).
Have each test case restart from a fixed point (like loading the base URL).
Set your timeout (how long you want the system to wait before moving on).
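If you drive the same checks from JUnit with WebDriver rather than from the Selenium IDE, the Verify-style behaviour can be reproduced with JUnit's ErrorCollector rule. A minimal sketch - the loadTitle helper is a hypothetical stand-in for real WebDriver calls:

import org.junit.Rule;
import org.junit.Test;
import org.junit.rules.ErrorCollector;
import static org.hamcrest.CoreMatchers.is;

public class PageChecksTest {

    // ErrorCollector gives "verify" semantics: failed checks are recorded
    // but the test method keeps running, so every page is still visited.
    @Rule
    public ErrorCollector collector = new ErrorCollector();

    @Test
    public void checkAllPages() {
        // Each failed check is reported at the end of the test instead of
        // aborting the run at the first failure (assert-style behaviour).
        collector.checkThat("home title", loadTitle("/"), is("Home"));
        collector.checkThat("about title", loadTitle("/about"), is("About"));
    }

    // Hypothetical stand-in for something like:
    //   driver.get(baseUrl + path); return driver.getTitle();
    private String loadTitle(String path) {
        return "/".equals(path) ? "Home" : "About";
    }
}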