How to organize tests into sections/batches/clusters - swift

I'm coming from languages where, in my unit test suite, I'll typically have one (unit) test file for each class, and organize the tests for that class into batches. For example, in Ruby/RSpec, I'll use describe/context blocks to clump tests that relate to (for example) a specific method or purpose together.
In Swift, as far as I can tell, there's no equivalent functionality. Most Swift code seems to use naming conventions (e.g. func test_someMethod_doesFirstThing, test_someMethod_doesSecondThing) to indicate related tests, but nothing else.
Am I missing something? Is there a Swift syntax for organizing similar tests within a file beyond test func naming conventions?

Related

How to write Dart idiomatic utility functions or classes?

I am pondering a few different ways of writing utility classes/functions. By utility I mean code that is reused in many places in the project, for example a set of formatting functions for date & time handling.
I've got a Java background, where there was a tendency to write
class UtilsXyz {
    public static void doSth() {...}
    public static void doSthElse() {...}
}
which I find hard to unit test because of their static nature. The other way is to inject utility classes without static members here and there.
In Dart you can use both approaches, but I find other techniques more idiomatic:
mixins
Widely used and recommended in many articles for utility functions. But I see them as a solution to the infamous diamond problem rather than as a home for utility code, and they're not very readable. Although I can imagine more focused utility mixins that pertain only to Widgets, only to Presenters, only to UseCases, etc.; they seem natural then.
extension functions
It's quite natural to write '2023-01-29'.formatNicely(), but I'd like to be able to mock utility functions, and you cannot mock extension functions.
global functions
Last but not least: so far I find them the most natural (in terms of idiomatic Dart) way of providing utilities. I can unit test them, they're widely accessible, and they don't look weird like mixins. I can also import them with the as keyword to give the reader a hint about where the function in use actually comes from.
Does anybody have some experience with the best practices for utilities and is willing to share them? Am I missing something?
To write utility functions in an idiomatic way for Dart, your options are either extension methods or global functions.
You can see that Dart has a linter rule addressing this problem:
AVOID defining a class that contains only static members.
Creating classes with the sole purpose of providing utility or otherwise static methods is discouraged. Dart allows functions to exist outside of classes for this very reason.
https://dart-lang.github.io/linter/lints/avoid_classes_with_only_static_members.html.
Extension methods.
but I'd like to unit test some utility functions, and you cannot test extension functions, because they're static.
I did not find any resource stating that extension methods are static, neither on Stack Overflow nor in the Dart extension documentation (although extensions can have static methods themselves). Also, there is an open issue about supporting static extension members.
So, I think extensions are testable as well.
To test extension methods you have 2 options:
Import the extension name and use the extension syntax inside the tests.
Write an equivalent global utility function, test it instead, and make the extension method call this global function (I do not recommend this, because if someone changes the extension method, the test will not catch it).
EDIT: as jamesdlin mentioned, extensions themselves can be tested, but they cannot be mocked, since they need to be resolved at compile time.
Global functions.
To test global functions, just import them and test them.
I think global functions are pretty straightforward:
This is the simplest, most idiomatic way to write utility functions; it does not raise any "wtf" flag when someone reads your code (as mixins might), even for Dart beginners.
This also takes advantage of the Dart top-level functions feature.
That's why I prefer this approach for utility functions that are not attached to any other classes.
And, if you are writing a library/package, the @visibleForTesting annotation may be helpful (it comes from https://pub.dev/packages/meta).

What are the differences between `Test Behaviours` and `Templates` in Citrus Framework?

Both seem to be made to extract a common set of test actions and reuse it in any number of test cases.
One obvious and important difference is that Templates support parameters while Test Behaviours must share test variables with the test cases to exchange data.
Are there any other notable differences? Will one of the two disappear in the near future?
Templates are meant to be used in XML test cases only. TestBehaviors are meant to be used in JavaDSL test cases. You are free to add a local member variable to the TestBehavior implementation that is not shared with the test context and therefore not shared with the test case.
So, bottom line: both components provide exactly the same features - one for XML, one for JavaDSL.
Edit: Both are meant to remain in the framework in the future.

code generation using Treehugger scala

I am using treehugger to generate code at runtime. I could not find much documentation about it. My question is: if I generate classes using treehugger, will I be able to access those classes in the future?
To be precise: I want to read data coming from files like CSV and create classes at runtime. Can I use those classes in the future, say in the next class generated at runtime?
I am really new to scala, please forgive if I am not clear in explaining.
Thanks a lot!
I've done something similar, so I'll share what I've learned:
Treehugger ultimately generates code (strings) at runtime to be used in a subsequent, separate run (or I suppose to be eval'd at runtime, but I never got that to work).
So the course of action depends on what you mean by "runtime":
Are your .csv files only available at runtime? If you have access to the files at compile time (as is often the case), then you have two options: experimental (Scala macros) or traditional (an sbt code-generation plugin) -- both approaches are similar but have subtle pros and cons.
If you only have access to the files at runtime, but still need to generate and "type" the classes and make the compiler expect them, then it seems to me that somebody has made a bad design mistake! But if you find yourself stuck in this circumstance, then it is possible to define and load classes at runtime with a bytecode-engineering library and some type-checker black magic (runtime type provider).
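To make that last option concrete, here is a minimal sketch of compiling and loading a generated class at runtime with the Scala reflection ToolBox rather than a bytecode-engineering library (this needs scala-compiler on the runtime classpath; the CsvRecord class and its fields are invented for illustration, and the source string would normally be produced by treehugger from the CSV header):

import scala.reflect.runtime.currentMirror
import scala.tools.reflect.ToolBox

val tb = currentMirror.mkToolBox()
// source generated at runtime, e.g. by treehugger from a CSV header
val source =
  "case class CsvRecord(name: String, age: Int); " +
  "(name: String, age: Int) => CsvRecord(name, age)"
// compile and evaluate: the last expression is a factory for the new class
val makeRecord = tb.eval(tb.parse(source)).asInstanceOf[(String, Int) => Any]
val row = makeRecord("Alice", 42)
// the pre-compiled part of the program only sees 'row' as Any, which is
// exactly the typing limitation described above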

BDD in Scala - Does it have to be ugly?

I've used lettuce for Python in the past. It is a simple BDD framework where specs are written in an external plain-text file. The implementation uses regexes to identify each step, providing reusable code for each sentence in the specification.
In Scala, with either specs2 or ScalaTest, I'm being forced to write the specification alongside the implementation, making it impossible to reuse the implementation in another test (sure, we could implement it in a function somewhere) and making it impossible to separate the test implementation from the specification itself (something that I used to do, providing acceptance tests to clients for validation).
Concluding, I raise my question: considering the importance of having tests validated by clients, is there a way in BDD frameworks for Scala to load the tests from an external file, raising an exception if a sentence in the test is not implemented yet and executing the test normally if all sentences have been implemented?
I've just discovered a cucumber plugin for sbt. Tests would be implemented under test/scala and specifications would be kept in test/resources as plain text files. I'm just not sure how reliable the library is and whether it will be supported in the future.
Edit:
The above is a wrapper for the following plugin, which solves the problem perfectly and supports Scala.
https://github.com/cucumber/cucumber-jvm
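To give a feel for the split, here is a rough sketch of what it looks like with cucumber-jvm and its Scala DSL (the basket feature and steps are invented for illustration, and the ScalaDsl/EN package name varies between cucumber-scala versions):

// src/test/resources/basket.feature -- plain text, readable by non-coders:
//   Feature: Basket
//     Scenario: Adding an item
//       Given an empty basket
//       When I add "apple"
//       Then the basket contains 1 item

// src/test/scala -- step definitions, reusable across scenarios and features
import io.cucumber.scala.{EN, ScalaDsl}

class BasketSteps extends ScalaDsl with EN {
  var basket: List[String] = Nil

  Given("""^an empty basket$""") { () =>
    basket = Nil
  }
  When("""^I add "([^"]*)"$""") { (item: String) =>
    basket = item :: basket
  }
  Then("""^the basket contains (\d+) items?$""") { (n: Int) =>
    assert(basket.size == n)
  }
}

A sentence in the feature file with no matching step definition is reported as undefined when the suite runs, which is essentially the "not implemented yet" signal asked for in the question.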
This is all about trade-offs. The cucumber style of specification is great because it is pure text that is easily editable and readable by non-coders.
However, they are also pretty rigid as specifications, because they impose a strict format based on features and Given-When-Then. In specs2, for example, we can write any text we want and annotate only the lines which are meant to be actions on the system or verifications. The drawback is that the text becomes annotated, and that pending must be specified explicitly to indicate what hasn't been implemented yet. Also, the annotation is just a reference to some code living somewhere, so you can of course use the usual programming techniques to get reusability.
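As a rough illustration of what such an annotated specification looks like in specs2 (the Basket class is invented; the s2 string, the $-references and pending are the specs2 features being described):

import org.specs2.Specification

// hypothetical system under test
case class Basket(items: List[String] = Nil) {
  def add(item: String): Basket = copy(items = item :: items)
  def size: Int = items.size
}

class BasketSpec extends Specification { def is = s2"""
 A basket holds the items a customer intends to buy.

 A newly created basket is empty                 $newBasketIsEmpty
 Adding an item increases the basket size        $addIncreasesSize
 Removing an item is not specified yet           $removeItem
"""

  def newBasketIsEmpty = Basket().size must_== 0
  def addIncreasesSize = Basket().add("apple").size must_== 1
  def removeItem       = pending
}

The free text lines are kept as documentation in the report, and only the lines ending with a $-reference are executed.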
BTW, the link above is an interesting example of trade-off: in this file, the first spec is "uglier" but there are more compile-time checks that the When step uses the information from a Given step or that we don't have a sequence of Then -> When steps. The second specification is nicer but also more error-prone.
Then there is the issue of maintaining the regular expressions. If there is a strict separation between the people writing the features and the people implementing them, then it's very easy to break the implementation even if nothing substantial changes.
Finally, there is the question of version control. Who owns the document? How can we be sure that the code is in sync with the spec? Who refactors the specification when required?
There is, by far, no perfect solution. My own conclusion is that BDD artifacts should be in the hands of developers and verified by the other stakeholders, reading the code directly if it's readable or reading an html/pdf output. And if the BDD artifacts are owned by developers, they might as well use their own tools to make their life easier with verification (using a compiler when possible) and maintenance (using automated refactorings).
You said yourself that it is easy to make the implementation reusable with the normal mechanisms Scala provides for this kind of stuff (methods, functions, traits, classes, types, ...), so there isn't really a problem there.
If you want to give your customer a version without code, you can still give them the code files, and if they can't ignore a little syntax, you could probably write a custom reporter that writes all the text out to a file, maybe even formatted as HTML or something.
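For instance, with ScalaTest that could be a small custom Reporter along these lines (the file name and line format are arbitrary; the reporter would be registered with the runner's -C option or via sbt's testOptions):

import java.io.{FileWriter, PrintWriter}
import org.scalatest.Reporter
import org.scalatest.events.{Event, TestFailed, TestPending, TestSucceeded}

// appends one plain-text line per test so non-coders can read the outcome
class PlainTextReporter extends Reporter {
  private val out = new PrintWriter(new FileWriter("acceptance-report.txt", true))

  def apply(event: Event): Unit = {
    event match {
      case e: TestSucceeded => out.println("OK      " + e.testName)
      case e: TestFailed    => out.println("FAILED  " + e.testName)
      case e: TestPending   => out.println("PENDING " + e.testName)
      case _                => ()
    }
    out.flush()
  }
}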
Another option would be to use JBehave or any other JVM-based framework; they should work with Scala without a problem.
Eric's main design criterion was the sustainability of executable specification development (through refactoring), not initial convenience due to the "beauty" of simple text.
see http://etorreborre.github.io/specs2/
The features of specs2 are:
Concurrent execution of examples by default
ScalaCheck properties
Mocks with Mockito
Data tables
AutoExamples, where the source code is extracted to describe the example
A rich library of matchers: easy to create and compose, usable with must and should, returning "functional" results or throwing exceptions, reusable outside of specs2 (in JUnit tests for example)
Forms for writing Fitnesse-like specifications (with Markdown markup)
Html reporting to create documentation for acceptance tests, to create a User Guide
Snippets for documenting APIs with always up-to-date code
Integration with sbt and JUnit tools (maven, IDEs,...)
Specs2 is quite impressive in both design and implementation.
If you look closely you will see that the DSL can be extended while keeping type safety and a strong command of the domain code under development.
Anyone who sets aside the "it's uglier" argument and tries this seriously will find it powerful.
Check out the structured Forms and Snippets.

Scala: Lazy baking and runtime compilation of cake pattern

One of the great limitations of the cake pattern is that it's static. I would like to be able to mix in traits potentially written by different coders completely independently. However, the traits would not need to be mixed in frequently. The user would have an initialisation screen where they would choose the traits / assemblies before the main application was run. So the thought occurred to me: why not mix in and compile the chosen traits from within the user choice selection module? If the compilation failed, no problem, the user would just get back some message - incompatible assemblies or whatever. If the compilation succeeded, then the top UI module would load the newly compiled classes along with the pre-compiled parts of the assemblies and run the main application. Note there might only need to be one or two classes compiled during runtime initialisation. All the rest of the code could have been compiled normally.
I'm pretty new to Scala. Is this a recognised pattern? Is there any support for it? It seems mad to have to use Guice for a relatively simple dependency situation. Can I run the Scala compiler easily from within an application? Can I run it in memory and use its output from memory without unnecessary file creation?
Note: Although appearing to be dynamic, this methodology would remain 100% static.
Edit: it occurs to me that one of the drivers of Microsoft's Roslyn project was to enable just this sort of thing for C# and Visual Basic. But that seems to have been a pretty big project, even for a high-powered Microsoft team.
Calling the compiler directly from within Scala is doable, but not for the timid. Luckily, the good people at Twitter have automated the process for you. (140-character celebrity micro-blogging, and some cool Scala utilities! Thanks Twitter.) You can use the com.twitter.util.Eval class to compile and evaluate Scala strings. In your example, you would do something like
val eval = new Eval()
// build an anonymous subclass that mixes in the chosen traits, compiled at runtime
val myObj = eval[BaseClass]("new BaseClass with " + traitNameList.mkString(" with "))
This will create a new object for you with all of the traits you desire mixed in. The question then arises as to whether this is a good idea. Downsides:
Calling out to the Scala compiler is not quick
If you do this enough, you will overload the PermGen space, as the classes you create will never be garbage collected
This really is more the sort of thing you want a dynamic language for, rather than Scala. You're likely to find places where this sort of works but clashes with the rest of your architecture (yes, that's vague).