I've started using Akka with Scala to develop a set of interacting components in a bus-oriented architecture. I need to test the fault-tolerance of the system, and for that I was wondering if there is any way to use a probabilistic model of failure (i.e., set some failure parameters for each Actor) within a Scala test framework. Any ideas? Any framework out there that already implements this?
I assume you know things like TestKit and have read the documentation at http://akka.io/docs/akka/1.3/scala/testing.html#akka-testkit (see also http://roestenburg.agilesquad.com/2011/02/unit-testing-akka-actors-with-testkit_12.html ).
You don't need Akka in the test setup, if I understood your problem right. Assume that Akka itself is tested and works OK. Now you only have to test your code. Since you didn't show code it's hard to give advice, but I will try:
You can test your method calls in different sequences and assert the results. I would hardcode the sequences, but you can also randomize them.
Show some code and I will clarify what I mean. I could also be wrong if I misunderstood your question.
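For instance, here is a minimal sketch of what I mean, in plain Scala without TestKit; FlakyComponent and its failure-probability parameter are just hypothetical stand-ins for your actors' logic:

import scala.util.Random

// Hypothetical component standing in for an actor's logic; the failure rate is a parameter.
class FlakyComponent(failureProbability: Double, rng: Random) {
  def process(input: Int): Int =
    if (rng.nextDouble() < failureProbability) sys.error("injected failure")
    else input * 2
}

object FaultToleranceSketch extends App {
  val rng = new Random(42) // a fixed seed keeps the randomized run reproducible
  val component = new FlakyComponent(failureProbability = 0.3, rng = rng)
  val results = (1 to 100).map { i =>
    try Right(component.process(i))
    catch { case e: RuntimeException => Left(e.getMessage) }
  }
  // Assert on aggregate behaviour: some calls fail, and every success is correct.
  assert(results.exists(_.isLeft), "expected at least one injected failure")
  assert(results.zipWithIndex.forall {
    case (Right(v), i) => v == (i + 1) * 2
    case (Left(_), _)  => true
  })
}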
I'm developing a Scala project at https://github.com/jonaskoelker/equate/ which gives you equality assertions for ScalaTest which print a diff of observed vs. expected if they're unequal. This is particularly useful for long strings and large case classes.
I'd like to publish one version of equate for ScalaTest v3.0.8 and one for ScalaTest v3.1.1.
What are best practices for doing so? My web searches came up empty. My own first idea is to publish two things with different names, where the name says which version of ScalaTest each thing is compatible with. Is there a better way?
My way seems rather low-tech. It seems to me that some grunt work could be automated away if the ScalaTest version information were encoded some other way. This seems obvious enough that someone else has probably thought about it and done something about it. I'd like to know what, so that I can release my code the smart way rather than the dumb way.
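For reference, here is a sketch of my low-tech idea expressed in sbt; the setting key, the system property, and the naming scheme are just my guesses, not an established convention:

// build.sbt
val scalatestVersion = settingKey[String]("ScalaTest version to compile against")

// Pick the target version at build time, e.g. sbt -Dscalatest.version=3.0.8 publish
scalatestVersion := sys.props.getOrElse("scalatest.version", "3.1.1")

// Encode the supported ScalaTest line in the artifact name: equate-scalatest-3.1, -3.0, ...
name := s"equate-scalatest-${scalatestVersion.value.split('.').take(2).mkString(".")}"

libraryDependencies += "org.scalatest" %% "scalatest" % scalatestVersion.value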
What are the differences between Test Behaviours and Templates in Citrus Framework?
Both seem to be made to extract a common set of test actions and reuse it in any number of test cases.
One obvious and important difference is that Templates support parameters while Test Behaviours must share test variables with the test cases to exchange data.
Are there any other notable differences? Will one of the two disappear in the near future?
Templates are meant to be used in XML test cases only. TestBehaviors are meant to be used in JavaDSL test cases. You are free to add a local member variable to the TestBehavior implementation that is not shared with the test context and therefore not shared with the test case.
So bottom line both components provide exactly the same features - one for XML, one for JavaDSL.
Edit: Both are meant to remain in the framework in the future.
I'm somewhat familiar with Scala and less familiar with Akka, although I know what the actor model is (the idea seems quite simple).
So let's say that right now this is my code (in reality what I need is an event sourcing application). I need to be able to use it from any language, not just from the JVM.
So of course I googled about that and found this. The problem is that, if my understanding is correct, I would need to create some custom protocol, deserialization and dispatching for ZMQ messages, and that is totally uncool. Maybe a solution for that already exists? If not, then how do I do it in the most efficient way? Maybe I need to create some message case classes and something like a facade actor that would do the deserialization?
import akka.actor.{Actor, ActorSystem, Props}

class HelloActor extends Actor {
  def receive = {
    case "hello" => println("well, helllo!")
    case _       => println("huh?")
  }
}

object Main extends App {
  val system = ActorSystem("HelloSystem")
  val helloActor = system.actorOf(Props[HelloActor], name = "helloactor")
  helloActor ! "hello"
  helloActor ! "buenos dias"
}
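To illustrate the facade-actor idea, here is a rough sketch of what I have in mind; the line-based wire format ("greet:...") and all the names are made up:

import akka.actor.{Actor, ActorRef}

// Typed messages the rest of the system works with.
case class Greet(text: String)
case class Unknown(raw: String)

// Facade actor: turns raw wire strings (e.g. read from a ZMQ socket) into case classes.
class FacadeActor(target: ActorRef) extends Actor {
  def receive = {
    case raw: String =>
      raw.split(":", 2) match {
        case Array("greet", payload) => target ! Greet(payload)
        case _                       => target ! Unknown(raw)
      }
  }
}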
There are many ways to do this; it depends on the protocol you are using, etc. For a language-specific way, you can use Pyro. Just like in Java, you can serialize generic objects in Python and then transfer them over the network, which is what you can use Pyro for. You can take advantage of the fact that Python is implemented both on the JVM (Jython) and natively. I'm not sure it's a great idea to write this just in Scala and Python; I would create the API in Java and then add that to the Scala classpath, so that any other JVM language can also use your API. In addition, it's more common to use Jython with Java, so there are other benefits that come with being in the majority.
But anyway, the common language that the JVM and Python will understand will be these serialized Python objects. So what you will need to know is:
How to use Jython with Java
How to use Pyro
And yeah, using Scala with Jython is only a matter of adding the jars to the classpath, as you probably already know.
EDIT: Ok I think I might not have made this method clear enough. So basically:
The JVM uses Jython to create a Jython instance, which is sent to a remote Python object. The communication is done with the Pyro module. This program can send serialized Python objects back as well.
This is what happens normally with remote actors in Java, except the messages implement Serializable. Python and Java are not in the same process, or using native methods, or anything like that. They can be on the same machine or different machines. This method is not platform-specific.
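As a concrete starting point, here is a minimal sketch of the Jython-embedding half from the JVM side (Scala here; the Pyro calls would live inside the interpreted Python code). It assumes the Jython jar is on the classpath:

import org.python.util.PythonInterpreter

object JythonBridge extends App {
  val interp = new PythonInterpreter()
  // Any Python you run here could import the Pyro module and talk to a remote object.
  interp.exec("greeting = 'hello from python'")
  println(interp.get("greeting", classOf[String]))
}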
Hopefully this method is useful to someone.
In my case the Akka actor solution was a little bit overkill, so I ended up implementing my own event sourcing solution in this open source project.
The persistence layer is a decision for the developer, but I provide practical examples of execution using Couchbase.
Take a look in case you find it useful.
https://github.com/politrons/Scalaydrated
I've used lettuce for Python in the past. It is a simple BDD framework where specs are written in an external plain-text file. The implementation uses regexes to identify each step, providing reusable code for each sentence in the specification.
Using Scala, with either specs2 or ScalaTest, I'm forced to write the specification alongside the implementation, making it impossible to reuse the implementation in another test (sure, we could implement it in a function somewhere) and making it impossible to separate the test implementation from the specification itself (something that I used to do, providing acceptance tests to clients for validation).
To conclude, I raise my question: considering the importance of having tests validated by clients, is there a way in BDD frameworks for Scala to load the tests from an external file, raising an exception if a sentence in the test is not implemented yet and executing the test normally if all sentences have been implemented?
I've just discovered a Cucumber plugin for sbt. Tests would be implemented under test/scala and specifications would be kept in test/resources as plain text files. I'm just not sure how reliable the library is or whether it will be supported in the future.
Edit:
The above is a wrapper for the following plugin, which solves the problem perfectly and supports Scala.
https://github.com/cucumber/cucumber-jvm
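To make the layout concrete, here is a rough sketch of the split; the step-definition imports (cucumber.api.scala.ScalaDsl) come from the classic cucumber-jvm Scala bindings, so treat the exact package names as a guess for whichever version the plugin pulls in:

# src/test/resources/search.feature (plain-text specification given to clients)
Feature: Search
  Scenario: Finding a product
    Given a catalogue containing "tea"
    When the user searches for "tea"
    Then the search result contains "tea"

// src/test/scala/SearchSteps.scala (glue code; a missing step fails the run as undefined)
import cucumber.api.scala.{EN, ScalaDsl}

class SearchSteps extends ScalaDsl with EN {
  var catalogue: List[String] = Nil
  var results: List[String] = Nil

  Given("""^a catalogue containing "([^"]*)"$""") { (item: String) =>
    catalogue = item :: catalogue
  }
  When("""^the user searches for "([^"]*)"$""") { (query: String) =>
    results = catalogue.filter(_.contains(query))
  }
  Then("""^the search result contains "([^"]*)"$""") { (item: String) =>
    assert(results.contains(item))
  }
}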
This is all about trade-offs. The cucumber style of specification is great because it is pure text, easily editable and readable by non-coders.
However, such specifications are also pretty rigid because they impose a strict format based on features and Given-When-Then. In specs2, for example, we can write any text we want and annotate only the lines which are meant to be actions on the system or verifications. The drawback is that the text becomes annotated, and that pending must be explicitly specified to indicate what hasn't been implemented yet. Also, an annotation is just a reference to some code living somewhere, so you can of course use the usual programming techniques to get reusability.
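A minimal sketch of this annotated-text style in specs2 (the example spec itself is made up):

import org.specs2.Specification

class HelloWorldSpec extends Specification { def is = s2"""
  This is a specification for the 'Hello world' string.

  The string "Hello world" should
    start with 'Hello'  $e1
    end with 'world'    $e2
  """

  // Only these annotated lines are executable; the rest is free-form text.
  def e1 = "Hello world" must startWith("Hello")
  def e2 = "Hello world" must endWith("world")
}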
BTW, the link above is an interesting example of this trade-off: in that file, the first spec is "uglier", but there are more compile-time checks that the When step uses the information from a Given step, or that we don't have a Then -> When sequence of steps. The second specification is nicer but also more error-prone.
Then there is the issue of maintaining the regular expressions. If there is a strict separation between the people writing the features and the people implementing them, then it's very easy to break the implementation even if nothing substantial changes.
Finally, there is the question of version control. Who owns the document? How can we be sure that the code is in sync with the spec? Who refactors the specification when required?
There is no perfect solution, by far. My own conclusion is that BDD artifacts should be in the hands of developers and verified by the other stakeholders, who read the code directly if it's readable, or read an HTML/PDF output. And if the BDD artifacts are owned by developers, they might as well use their own tools to make their life easier with verification (using a compiler when possible) and maintenance (using automated refactorings).
You said yourself that it is easy to make the implementation reusable using the normal means Scala provides for this kind of stuff (methods, functions, traits, classes, types, ...), so there isn't really a problem there.
If you want to give your customer a version without code, you can still give them the code files, and if they can't ignore a little syntax, you could probably write a custom reporter that writes all the text out to a file, maybe even formatted as HTML or something.
Another option would be to use JBehave or any other JVM-based framework; they should work with Scala without a problem.
Eric's main design criterion was the sustainability of executable-specification development (through refactoring), not the initial convenience afforded by the "beauty" of simple text.
see http://etorreborre.github.io/specs2/
The features of specs2 are:

- Concurrent execution of examples by default
- ScalaCheck properties
- Mocks with Mockito
- Data tables
- AutoExamples, where the source code is extracted to describe the example
- A rich library of matchers:
  - easy to create and compose
  - usable with must and should
  - returning "functional" results or throwing exceptions
  - reusable outside of specs2 (in JUnit tests for example)
- Forms for writing Fitnesse-like specifications (with Markdown markup)
- HTML reporting to create documentation for acceptance tests, to create a User Guide
- Snippets for documenting APIs with always up-to-date code
- Integration with sbt and JUnit tools (Maven, IDEs, ...)
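As one example from the list, here is a small sketch of the Data tables feature (the arithmetic spec is made up for illustration):

import org.specs2.Specification
import org.specs2.matcher.DataTables

class AdditionSpec extends Specification with DataTables { def is = s2"""
  adding integers should just work  $additions
  """

  // Each row is checked against the closure at the end of the table.
  def additions =
    "a" | "b" | "sum" |>
     1  !  1  !  2    |
     2  !  2  !  4    | { (a, b, sum) => a + b must_== sum }
}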
Specs2 is quite impressive in both design and implementation.
If you look closely you will see that the DSL can be extended while you keep type safety and a strong command of the domain code under development.
Anyone who sets aside the "it is uglier" argument and tries this seriously will find its power.
Check out the structured forms and snippets.
I am battling to understand why a post compiler, like PostSharp, should ever be needed?
My understanding is that it just inserts code where attributed in the original code, so why doesn't the developer just do that code writing themselves?
I expect that someone will say it's easier to write since you can use attributes on methods and then not clutter them up with boilerplate code, but that can be done using DI or reflection and a touch of forethought, without a post-compiler. I know that since I have said reflection, the performance elephant will now enter the room - but I do not care about relative performance here, when the absolute performance for most scenarios is trivial (sub-millisecond to millisecond).
Let's try to take an architectural view of the issue. Say you are an architect (everyone wants to be an architect ;)
You need to deliver the architecture to your team:
a selected set of libraries, architectural patterns, and design patterns. As a part of your design, you say: "we will implement caching using the following design pattern:"
string key = string.Format("[{0}].MyMethod({1},{2})", this, param1, param2);
T value;
if ( !cache.TryGetValue( key, out value ) )
{
    using ( cache.Lock(key) )
    {
        if ( !cache.TryGetValue( key, out value ) )
        {
            // Do the real job here and store the value into variable 'value'.
            cache.Add( key, value );
        }
    }
}
This is a correct way to do caching. Developers are going to implement this pattern thousands of times, so you write a nice Word document telling them how you want the pattern to be implemented. Yeah, a Word document. Do you have a better solution? I'm afraid you don't. Classic code generators won't help. Functional programming (delegates)? It works fairly well for some aspects, but not here: you need to pass method parameters to the pattern. So what's left? Describe the pattern in natural language and trust that developers will implement it.
What will happen?
First, some junior developer will look at the code and say: "Hm. Two cache lookups. Kinda useless. One is enough." (That's not a joke -- ask the DNN team about this issue.) And your pattern ceases to be thread-safe.
As an architect, how do you ensure that the pattern is properly applied? Unit testing? Fair enough, but you will hardly detect threading issues this way. Code review? That may be the solution.
Now, what if you decide to change the pattern? For instance, you detect a bug in the cache component and decide to use your own. Are you going to edit thousands of methods? It's not just refactoring: what if the new component has different semantics?
What if you decide that a method is not going to be cached any more? How difficult will it be to remove the caching code?
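To see why the delegate route mentioned earlier falls short, here is roughly what it looks like (a Scala sketch with a made-up cache helper); every method body still has to be wrapped by hand, one call site at a time, which is exactly the maintenance problem just described:

import scala.collection.concurrent.TrieMap

object Caching {
  private val cache = TrieMap.empty[String, Any]
  // Hypothetical helper: compute once per key, thread-safe via TrieMap.
  def cached[T](key: String)(compute: => T): T =
    cache.getOrElseUpdate(key, compute).asInstanceOf[T]
}

class PriceService {
  def price(productId: Int, currency: String): BigDecimal =
    Caching.cached(s"PriceService.price($productId,$currency)") {
      BigDecimal(42) // the real job goes here
    }
}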
The AOP solution (whatever the framework is) has the following advantages over plain code:
It reduces the number of lines of code.
It reduces the coupling between components, so you don't have to change as many things when you decide to change the logging component (just update the aspect); therefore, it improves the capacity of your source code to cope with new requirements over time.
Because there is less code, the probability of bugs is lower for a given set of features, therefore AOP improves the quality of your code.
So if you put it all together:
Aspects reduce both development costs and maintenance costs of software.
I have a 90 min talk on this topic and you can watch it at http://vimeo.com/2116491.
Again, the architectural advantages of AOP are independent of the framework you choose. The differences between frameworks (also discussed in this video) influence principally the extent to which you can apply AOP to your code, which was not the point of this question.
Suppose you already have a class which is well-designed, well-tested, etc. You want to easily add some timing to some of the methods. Yes, you could use dependency injection and create a decorator class which proxies to the original but with timing for each method - but even that class is going to be a mess of repetition...
... or you can add reflection to the mix and use a dynamic proxy of some description, which lets you write the timing code once, but requires you to get that reflection code just right - which isn't as easy as it might be, especially if generics are involved.
... or you can add an attribute to each method that you want timed, write the timing code once, and apply it as a post-compile step.
I know which seems more elegant to me - and more obvious when reading the code. It can be applied even in situations where DI isn't appropriate (and it really isn't appropriate for every single class in a system) and with no other changes elsewhere.
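To make the comparison concrete, here is a sketch of that hand-written decorator (a hypothetical Service trait, in Scala); the per-method repetition is exactly the mess referred to above:

trait Service {
  def lookup(id: Int): String
  def store(id: Int, value: String): Unit
}

// Hand-written timing decorator: the wrapping must be repeated for every method.
class TimedService(inner: Service) extends Service {
  private def timed[A](name: String)(body: => A): A = {
    val start = System.nanoTime()
    try body
    finally println(s"$name took ${(System.nanoTime() - start) / 1000000} ms")
  }
  def lookup(id: Int): String = timed("lookup")(inner.lookup(id))
  def store(id: Int, value: String): Unit = timed("store")(inner.store(id, value))
}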
AOP (PostSharp) is for attaching code to all sorts of points in your application from one location, so you don't have to write it at each of those points yourself.
You cannot achieve with reflection what PostSharp can do.
I personally don't see a big use for it in a production system, as most things can be done in other, better ways (logging, etc.).
You may like to review the other threads on this matter:
Anyone with Postsharp experience in production?
Other than logging, and transaction management what are some practical applications of AOP?
Aspect Oriented Programming: What do you use PostSharp for?
etc (search)
Aspects take away all the copy-and-paste code and make adding new features faster.
I hate nothing more than, for example, having to write the same piece of code over and over again. Gael has a very nice example regarding INotifyPropertyChanged on his website (www.postsharp.net).
This is exactly what AOP is for. Forget about the technical details, just implement what you are being asked for.
In the long run, I think we should all say goodbye to the way we are writing software now. It's tedious and plainly stupid to write boilerplate code and iterate manually.
The future belongs to a declarative, functional style held together by an object-oriented framework - with the cross-cutting concerns handled by aspects.
I guess the only people who will not get it soon are the guys who are still paid for lines of code.