Why is ScalaCheck discarding so many generated values in my specification? - specs2

I have written a ScalaCheck test case within Specs2. The test case gives up because too many tests were discarded. However, it doesn't tell me why they were discarded. How can I find out why?

Set a breakpoint on the org.scalacheck.Gen.fail method and see what is calling it.
Incidentally, in my case the problem was twofold:
1. I had set maxDiscarded to a value (1) that was too small, because I was being too optimistic. I didn't realise that ScalaCheck starts with a collection size of 0 by default even when asked for a non-empty collection (I don't know why it does this).
2. I was generating collections of size 1 and up even though, as I later realised, they should have been of size 2 and up for what I was trying to test. The undersized collections were causing further discards in later generators built on that generator (see the sketch below).
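As an illustration of that second pitfall (a minimal sketch; the generator names and bounds are made up): filtering an existing generator with suchThat discards every value the predicate rejects, whereas building the constraint into the generator avoids discards entirely.

import org.scalacheck.Gen

// Discards: listOf starts at size 0, so suchThat rejects many cases.
val discarding: Gen[List[Int]] =
  Gen.listOf(Gen.choose(0, 100)).suchThat(_.size >= 2)

// No discards: guarantee at least two elements by construction.
val constrained: Gen[List[Int]] = for {
  first  <- Gen.choose(0, 100)
  second <- Gen.choose(0, 100)
  rest   <- Gen.listOf(Gen.choose(0, 100))
} yield first :: second :: rest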

Related

How to get NUnit filters at runtime?

Does anybody know how to get the list of categories (provided with the 'where' filter to nunit-console) at runtime?
Depending on this, I need to differently initialize the test assembly.
Is there something static like TestExecutionContext that may contain such information?
The engine doesn't pass information to the framework about why it's running a particular test, i.e. whether it's running all tests or whether a test was selected by name or category. That's deliberately kept as something the test doesn't know about; the underlying philosophy is that tests should just run based on the data provided to them.
On some platforms it's possible to get the command line that ran the test. With that info you could decode the various options and draw some conclusions, but it would probably be easier to restructure the tests so they don't need this information.
As a secondary reason, it would also be somewhat complicated both to supply the info you want and to use it. A test may have multiple categories: imagine a test selected because two categories matched, for example!
Is it possible that what you really want to do is to pass some parameters to your tests? There is a facility for doing that of course.
I think this is a bit of an XY problem. Depending on what you are actually trying to accomplish, the best approach is likely to be different. Can you edit to tell us what you are trying to do?
UPDATE:
Based on your comment, I gather that some of your initialization is both time-consuming and not needed unless certain tests are run.
Two approaches to this (or combine them):
1. Do less work in all your initialization (e.g. TestCase, TestCaseSource, SetUpFixture). It's generally best not to create your classes under test or initialize databases there. Instead, simply leave strings, ints, etc., which allow the actual test to do the work if and only if it is run.
2. Use a SetUpFixture in a namespace containing all the tests that require that particular initialization. If you don't run any tests from that namespace, the initialization won't be done (see the sketch below).
Of course both of the above may entail a large refactoring of your tests, but the legacy app won't have to be changed.
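A minimal sketch of the second approach (the class and namespace names are hypothetical): in NUnit 3, a SetUpFixture's OneTimeSetUp runs once before any test in its namespace, and not at all when no test from that namespace is selected.

using NUnit.Framework;

namespace MyApp.Tests.RequiresDatabase
{
    [SetUpFixture]
    public class DatabaseSetup
    {
        [OneTimeSetUp]
        public void BeforeAnyTestsInNamespace()
        {
            // hypothetical expensive initialization, e.g. seeding a test database
        }

        [OneTimeTearDown]
        public void AfterAllTestsInNamespace()
        {
            // hypothetical cleanup of whatever the setup created
        }
    }
}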

How to avoid arithmetic errors in PostgreSQL?

I have a PostgreSQL-powered web app that does some non-essential, simple calculations for reporting purposes, involving values from outside sources, multiplication and division. Today a multiplication that exceeded the value domain of a numeric( 10, 4 ) field led to an application crash. It would have been much better if the relevant field had just been set to null and a notice generated. The way the bug worked was that a wrong value in one field caused several views to become unavailable; a missing value in that place would have been sad but no big problem, whereas the blocked view is essential for the app to work.
Now I'm aware that in this particular case, declaring the field as numeric( 11, 4 ) would have prevented the bailout, but that, of course, only postpones the issue at hand. Since the error happened in a function call, I could also have written an exception handler; lastly, one could check either the multiplicands or the result for sane values (but that is in itself a little strange, as I would either have to guess based on magnitudes or do the multiplication in another numeric type that can probably handle a value whose magnitude is, in principle, not known to me with certainty, because it comes from external sources).
Exception handling is probably what this will boil down to; however, that entails doing all numeric calculations via PL/pgSQL function calls, implemented in many different places. None of the options seems particularly maintainable or elegant. So the question is: can I somehow configure PostgreSQL to ignore some or all arithmetic errors and use default values in such cases? If so, can that be done per database, or will I have to configure the server? If this is impossible or a Bad Idea, what are the best practices to avoid arithmetic errors?
Clarification: This is not a question about how to widen numeric( 10, 4 ) so that the field can hold values of 1e6 and above, and also not so much about error handling in the application that uses the DB. It's more about whether there is an operator, a function call, a general configuration or a general pattern that is most commonly recommended to deal with situations where a (non-essential) computation normally results in a number (or in fact some other value type) except with some inputs that cause exceptions, in which case the result could perfectly well and safely be discarded. Think of Excel printing #### when a cell is too narrow for the digits to be displayed, or JavaScript giving you NaN in place of arithmetic errors. Returning null instead of raising an exception may be a bad idea in general programming, but legitimate in specific cases.
Observe that PostgreSQL's error codes do include e.g. invalid_argument_for_logarithm, invalid_argument_for_ntile_function and division_by_zero, all grouped together under Class 22 — Data Exception, and that PostgreSQL does allow exception handling in function bodies. So I can also ask more specifically: how can I catch all class 22 exceptions short of listing all the error codes? But even then I still hope for a more principled approach.
Arguably the type numeric (without type modifiers) would be the right thing for you if you want to avoid overflows (which is what you seem to mean by “arithmetic error”) as much as possible.
However, there is still the possibility of a value overflowing even the numeric format.
There is no way to configure PostgreSQL so that it ignores a numeric overflow.
If the result of an operation cannot be represented in a data type, there should be an error. If the data supplied by the application can lead to an error, the application should be ready to handle such an error rather than “crash”. Failure to do so is an application bug.
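That said, the class 22 question above does have a direct answer: in PL/pgSQL exception clauses, the category condition name data_exception matches every error in Class 22, so a single handler suffices. A minimal sketch (safe_mul and the column names in the usage comment are hypothetical):

CREATE OR REPLACE FUNCTION safe_mul(a numeric, b numeric)
RETURNS numeric LANGUAGE plpgsql AS
$$
BEGIN
    -- the explicit cast raises numeric_value_out_of_range inside the
    -- function, where the handler below can trap it
    RETURN (a * b)::numeric(10, 4);
EXCEPTION
    WHEN data_exception THEN
        -- data_exception is the Class 22 category name, so this one
        -- branch catches numeric_value_out_of_range, division_by_zero, etc.
        RETURN NULL;
END;
$$;

-- usage: SELECT safe_mul(price, rate) FROM report_lines;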

Scala PriorityQueue conflict resolution?

I'm working on a project that uses a PriorityQueue and A*. After a lot of digging, I think part of the problem I'm encountering while my search tries to solve the problem lies in the PriorityQueue. I'm guessing that when it holds nodes with equal scores (for example, one generated earlier and one later) it will choose the earlier one rather than the one that was most recently generated.
Does anyone know if a PriorityQueue prioritizes the newest node if the scores are the same? If not, how can I make it do this?
Thanks!
PriorityQueue uses a heap to select the next element. Beyond that it makes no guarantees about how the elements are ordered. If it is important to you that nodes are ordered by addition order, you should keep a count of the number of items added and prioritize by the tuple (priority, -order).
If you do anything else, even if it happens to work now, it may break at any arbitrary time since the API makes no guarantees about how it chooses from among equal elements.
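A minimal sketch of that tie-breaking scheme, assuming an A*-style frontier where the lowest f-score is popped first (the Node type and all names here are illustrative):

import scala.collection.mutable.PriorityQueue

final case class Node(fScore: Int, label: String)

var counter = 0L  // insertion counter used only for tie-breaking

// Scala's PriorityQueue is a max-heap, so the ordering is reversed to
// pop the lowest fScore first; among equal scores the newest node
// (largest counter, hence smallest -counter) wins.
val frontier = PriorityQueue.empty[(Node, Long)](
  Ordering.by[(Node, Long), (Int, Long)] { case (n, order) => (n.fScore, -order) }.reverse
)

def push(n: Node): Unit = { counter += 1; frontier.enqueue((n, counter)) }

push(Node(5, "earlier"))
push(Node(5, "later"))
println(frontier.dequeue()._1.label)  // prints "later"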

Specs2 + ScalaCheck test failing due to too many discarded cases

In a ScalaCheck + Specs2 based test, I need two dates whose distance (in days) is at most Int.MaxValue.
At the moment I am using the arbitraries provided by ScalaCheck to generate the two dates; since the date generator is backed by the Long generator, this leads to too many discarded cases, making my test fail.
What is the right approach to solving the problem:
Should I modify my generators, or
should I modify the test parameters?
The best approach is probably to create your own generators for your domain.
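A sketch of such a domain generator, assuming java.time (the window for the base date is arbitrary): generate one date, then derive the second from a bounded day offset, so no pair ever needs to be discarded.

import java.time.LocalDate
import org.scalacheck.Gen

// Base date within an arbitrary window around the epoch.
val baseDate: Gen[LocalDate] =
  Gen.choose(-100000L, 100000L).map(LocalDate.ofEpochDay)

// A pair of dates at most Int.MaxValue days apart, by construction.
val datePair: Gen[(LocalDate, LocalDate)] = for {
  start  <- baseDate
  offset <- Gen.choose(0L, Int.MaxValue.toLong)
} yield (start, start.plusDays(offset))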

Why does this ScalaQuery statement only delete the odd rows?

When attempting to delete a batch of records, only the odd rows are deleted!
val byUser = Orders.createFinderBy(_.userID)
byUser(id).mutate(_.delete)
If I instead print the record, I get the correct number of rows.
byUser(id).mutate{x => x.echo}
I worked around the issue like this, which generates the desired SQL.
(for{o <- Orders if o.userID is id.bind } yield o).delete
But, why or how does the mutate version affect only the odd rows?
I've had a dig around in the source code and it seems to be as @RexKerr says: an iterator is used to process the elements, applying the deletions as it iterates (the while loop in the mutate method here):
https://github.com/rjmac/scala-query/blob/master/src/main/scala/org/scalaquery/MutatingInvoker.scala
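A toy model of why deleting through a forward-moving cursor skips every other row (this is not the ScalaQuery code, just an illustration of the behaviour):

import scala.collection.mutable.ArrayBuffer

val rows = ArrayBuffer(1, 2, 3, 4, 5, 6)
var i = 0
while (i < rows.length) {
  rows.remove(i)  // delete the current row; the next row shifts into slot i
  i += 1          // ...but the cursor still advances, skipping that row
}
println(rows)     // ArrayBuffer(2, 4, 6): only every other row was deleted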
Interestingly there is a previousAfterDelete flag that can be used to force the iterator backwards after each deletion. This appears to be set to true for Access databases (see the AccessQueryInvoker class) but not others:
https://github.com/rjmac/scala-query/blob/master/src/main/scala/org/scalaquery/ql/extended/AccessDriver.scala
I would recommend downloading the sources and debugging the code. Perhaps this flag should be set for the database vendor you are using. I'd also consider filing a bug report:
http://scalaquery.org/community.html
PS: I know this is an old question, but I answered it just in case anyone else has had this problem.