I am testing a parser I have written in Scala using ScalaTest. The parser handles one file at a time, and it has a singleton object like the following:
class Parser{...}
object Resolver {...}
The test case I have written is somewhat like this:
describe("Syntax:") {
val dir = new File("tests\\syntax");
val files = dir.listFiles.filter(
f => """.*\.chalice$""".r.findFirstIn(f.getName).isDefined);
for(inputFile <- files) {
val parser = new Parser();
val c = Resolver.getClass.getConstructor();
c.setAccessible(true);
c.newInstance();
val iserror = errortest(inputFile)
val result = invokeparser(parser,inputFile.getAbsolutePath) //local method
it(inputFile.getName + (if (iserror)" ERR" else " NOERR") ){
if (!iserror) result should be (ResolverSuccess())
else if(result.isInstanceOf[ResolverError]) assert(true)
}
}
}
Now at each iteration the side effects of previous iterations inside the singleton object Resolver are not cleaned up.
Is there any way to tell ScalaTest to re-initialize the singleton objects?
Update: Using Daniel's suggestion, I have updated the code and added more details.
Update: Apparently it is the Parser which is doing something fishy. On subsequent calls it doesn't discard the previous AST. Strange. Since this is off topic, I will dig more and probably use a separate thread for the discussion. Thanks all for answering.
Final Update: The issue was with a singleton object other than Resolver; it was in another file, so I had somehow missed it. I was able to solve this using Daniel Spiewak's reply. It is a dirty way to do things, but it's also the only option given my circumstances and the fact that I am writing test code, which is not going into production use.
According to the language spec, no, there is no way to recreate singleton objects. However, it is possible to reflectively invoke the constructor of a singleton, which overwrites the internal MODULE$ field that holds the actual singleton value:
object Test
Test.hashCode // => e.g. 779942019
val c = Test.getClass.getConstructor()
c.setAccessible(true)
c.newInstance()
Test.hashCode // => e.g. 1806030550
Now that I've shared the evil secret with you, let me caution you never, ever to do this. I would try very very hard to adjust the code, rather than playing sneaky tricks like this one. However, if things are as you say, and you really do have no other option, this is at least something.
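If you do end up needing it, here is a minimal sketch of packaging the trick as a reusable helper for the Resolver case from the question (assuming Resolver is on the test classpath); each call re-runs the singleton's constructor, overwriting MODULE$, before evaluating the body:
def withFreshResolver[A](body: => A): A = {
  val c = Resolver.getClass.getConstructor()
  c.setAccessible(true)
  c.newInstance() // Resolver's MODULE$ now points at a brand-new instance
  body
}

// e.g. in the loop from the question:
// val result = withFreshResolver { invokeparser(parser, inputFile.getAbsolutePath) }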
ScalaTest has several ways to let you reinitialize things between tests. However, this particular question is tough to answer without knowing more. The main question would be: what does it take to reinitialize the singleton object? If the singleton object can't be reinitialized without instantiating a new singleton object, then you'd need to make sure each test loaded the singleton object anew, which would require using custom class loaders. I find it hard to believe someone would design something that way, though. Can you update your question with more details like that? I'll take a look again later and see if the extra details make the answer more obvious.
ScalaTest has a runpath that loads classes anew for each run, but not a testpath. So you'll have to roll your own. The real problem here is that someone has designed this in a way that it is not easily tested. I would look at loading Resolver and Parser with a URLClassLoader inside each test. That way you'd get a new Resolver each test.
You'll need to take Parser & Resolver off of the classpath and off of the runpath. Put them into a directory of their own. Then create a URLClassLoader for each test that points to that directory. Then call findClass("Parser") on that class loader to get it. I'm assuming Parser refers to Resolver, and in that case the JVM will go back to the class loader that loaded Parser to get Resolver, which is your URLClassLoader. Do a newInstance on the Parser to get the instance. That should solve your problem, because you'll get a new Resolver singleton object for each test.
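A rough sketch of that, assuming the compiled Parser and Resolver classes live in a directory (here "parser-classes/", a made-up path) that is kept off the regular test classpath:
import java.io.File
import java.net.URLClassLoader

object ParserLoader {
  private val parserDir = new File("parser-classes").toURI.toURL

  def newParser(): AnyRef = {
    // a fresh loader per test means a fresh Resolver singleton per test, because the
    // JVM resolves Resolver through the same loader that loaded Parser
    val loader = new URLClassLoader(Array(parserDir), getClass.getClassLoader)
    val parserClass = loader.loadClass("Parser")
    parserClass.getDeclaredConstructor().newInstance().asInstanceOf[AnyRef]
  }
}
Since Parser is no longer visible to the test code as a compile-time type, the test would drive it reflectively or through a small interface that stays on the classpath.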
No answer, but I do have a simple example of where you might want to reset the singleton object in order to test the singleton construction in multiple potential situations. Consider something stupid like the following code. You may want to write a test that validates that an exception is thrown when the environment isn't set up correctly, and also a test that validates that no exception occurs when the environment is set up correctly. I know, I know, everyone says, "Provide a default when the environment isn't set up correctly," but I DO NOT want to do this; it would cause issues because there would be no notification that you're using the wrong system.
object RequiredProperties extends Enumeration {
  type RequiredProperties = String

  private def getRequiredEnvProp(propName: String) = {
    sys.env.get(propName) match {
      case None    => throw new RuntimeException(s"$propName is required but not found in the environment.")
      case Some(x) => x
    }
  }

  val ENVIRONMENT: String = getRequiredEnvProp("ENVIRONMENT")
}
Usage:
Init(RequiredProperties.ENVIRONMENT)
If I provided a default then the user would never know that it wasn't set and defaulted to the dev environment. Or something along these lines.
I've been using akka-http for a while now, and so far I've mostly logged things using scala-logging by extending either StrictLogging or LazyLogging and then calling:
log.info
log.debug
....
This is kinda OK, but it's hard to understand which logs were generated for which request.
As solutions for this go, I've only seen:
adding an implicit logging context that gets passed around (this is kinda verbose and would force me to add this context to all method calls) + a custom logger that adds the context info to the logging message.
using the MDC and a custom dispatcher; in order to implement this approach one would have to use the prepare() call, which has just been deprecated.
using AspectJ
Are there any other solutions that are more straightforward and less verbose? It would be OK to change the logging library, by the way.
Personally I would go with the implicit context approach. I'd start with:
(path("api" / "test") & get) {
val context = generateContext
action(requestId)
}
Then I would make it implicit:
(path("api" / "test") & get) {
implicit val context = generateContext
action
}
Then I would make the context generation a directive, e.g.:
val withContext: Directive1[MyContext] = Directive[Tuple1[MyContext]] {
  inner => ctx => inner(Tuple1(generateContext))(ctx)
}

withContext { implicit context =>
  (path("api" / "test") & get) {
    action
  }
}
Of course, you would have to take the context as an implicit parameter in every action. But it would have some advantages over MDC and AspectJ: it would be easier to test things, as you just need to pass a value. Besides, who said you only ever need to pass a request id and use it for logging? The context could just as well carry data about the logged-in user, their entitlements and other things that you could resolve once, use even before calling the action, and reuse inside the action.
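For illustration, a minimal sketch (assuming scala-logging, which the question already uses) of what an action taking the context implicitly might look like; MyContext and its fields are made-up names, not a fixed API:
import com.typesafe.scalalogging.Logger

final case class MyContext(requestId: String, userId: Option[String] = None)

object Actions {
  private val logger = Logger("api")

  // every action receives the context implicitly, so call sites stay as terse as above
  def action(implicit ctx: MyContext): String = {
    logger.info(s"[req=${ctx.requestId}] handling /api/test")
    s"handled request ${ctx.requestId}"
  }
}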
As you probably guessed, this would not work if you want the ability to e.g. remove logging completely. In such case AspectJ would make more sense.
I would have the most doubts about MDC. If I understand correctly, it has a built-in assumption that all the logic happens in the same thread. If you are using Futures or Tasks, can you actually guarantee that? I would expect that at best all logging calls would happen in the same thread pool, but not necessarily in the same thread.
Bottom line: all possible solutions would be some variant of what you already figured out, so the question is your exact use case.
I am having a little trouble understanding what can and cannot be done using FakeItEasy. Suppose I have a class
public class ToBeTested {
    public string MethodToBeTested() {
        SomeDependentClass dependentClass = new SomeDependentClass();
        var result = dependentClass.DoSomething();
        if (result) return "Something was true";
        return "Something was false";
    }
}
And I do something like below to fake the dependent class
var fakedDepClass = A.Fake<SomeDependentClass>();
A.CallTo(fakedDepClass).WithReturnType<bool>().Returns(true);
How can I use this fakedDepClass when I am testing MethodToBeTested? If SomeDependentClass were passed as an argument, then I could pass my fakedDepClass, but in my case it is not (also, this is legacy code that I don't control).
Any ideas?
Thanks
K
Calling new SomeDependentClass() inside MethodToBeTested means that you get a concrete actual SomeDependentClass instance. It's not a fake, and cannot be a FakeItEasy fake.
You have to be able to inject the fake class into the code to be tested, either (as you say) via an argument to MethodToBeTested or perhaps through one of ToBeTested's constructors or properties.
If you can't do that, FakeItEasy will not be able to help you.
If you do not have the ability to change ToBeTested (and I'd ask why you're writing tests for it, but that's an aside), you may need to go with another isolation framework. I have used TypeMock Isolator for just the sort of situation you describe, and it did a good job.
I know it's not directly possible to serialize a function/anonymous class to the database but what are the alternatives? Do you know any useful approach to this?
To present my situation: I want to award a user "badges" based on his scores. So I have different types of badges that can be easily defined by extending this class:
class BadgeType(id: Long, name: String, detector: Function1[List[UserScore], Boolean])
The detector member is a function that walks the list of scores and returns true if the user qualifies for a badge of this type.
The problem is that each time I want to add/edit/modify a badge type I need to edit the source code, recompile the whole thing and re-deploy the server. It would be much more useful if I could persist all BadgeType instances to a database. But how to do that?
The only thing that comes to mind is to have the body of the function as a script (ex: Groovy) that is evaluated at runtime.
Another approach (that does not involve a database) might be to have each badge type in a jar that I can somehow hot-deploy at runtime, which I guess is how a plugin system might work.
What do you think?
My very brief advice is that if you want this to be truly data-driven, you need to implement a rules DSL and an interpreter. The rules are what get saved to the database, and the interpreter takes a rule instance and evaluates it against some context.
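To make that concrete, here is a very small hedged sketch of such a DSL: the rule is plain data (so it can be persisted, e.g. as JSON or a blob), and the interpreter evaluates it against the user's scores. UserScore and the rule shapes are illustrative stand-ins, not part of the original design:
case class UserScore(game: String, points: Int)

sealed trait Rule
case class TotalAtLeast(points: Int)              extends Rule
case class CountAtLeast(game: String, times: Int) extends Rule
case class And(left: Rule, right: Rule)           extends Rule

object RuleInterpreter {
  // walk the rule tree and decide whether the scores satisfy it
  def eval(rule: Rule, scores: List[UserScore]): Boolean = rule match {
    case TotalAtLeast(p)    => scores.map(_.points).sum >= p
    case CountAtLeast(g, n) => scores.count(_.game == g) >= n
    case And(l, r)          => eval(l, scores) && eval(r, scores)
  }
}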
But that's overkill most of the time. You're better off having a little snippet of actual Scala code that implements the rule for each badge, giving them unique IDs, and then storing the IDs in the database.
e.g.:
trait BadgeEval extends Function1[User, Boolean] {
  def badgeId: Int
}

object Badge1234 extends BadgeEval {
  def badgeId = 1234
  def apply(user: User) = {
    user.isSufficientlyAwesome // && ...
  }
}
You can either have a big whitelist of BadgeEval instances:
val weDontNeedNoStinkingBadges = Map(
  1234 -> Badge1234,
  5678 -> Badge5678,
  // ...
)

def evaluator(id: Int): Option[BadgeEval] = weDontNeedNoStinkingBadges.get(id)

def doesUserGetBadge(user: User, id: Int) = evaluator(id).map(_(user)).getOrElse(false)
... or if you want to keep them decoupled, use reflection:
def badgeEvalClass(id: Int) = Class.forName("com.example.badge.Badge" + id + "$").asInstanceOf[Class[BadgeEval]]
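To get the singleton instance back from that Class, one option is to read the MODULE$ field that Scala generates for objects (the same field mentioned earlier in this thread); a small hedged helper, with badgeEval as an illustrative name:
def badgeEval(id: Int): BadgeEval =
  badgeEvalClass(id).getField("MODULE$").get(null).asInstanceOf[BadgeEval]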
... and if you're interested in runtime pluggability, try the service provider pattern.
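As a hedged sketch of that service-provider route, java.util.ServiceLoader can discover BadgeEval implementations at runtime, assuming each badge jar ships a META-INF/services file listing its implementation classes; BadgeRegistry is an illustrative name:
import java.util.ServiceLoader
import scala.jdk.CollectionConverters._ // scala.collection.JavaConverters on pre-2.13 Scala

object BadgeRegistry {
  // ServiceLoader instantiates each listed provider via a public no-arg constructor,
  // so providers are easiest to write as plain classes extending BadgeEval
  def loadBadgeEvals(): Map[Int, BadgeEval] =
    ServiceLoader.load(classOf[BadgeEval]).asScala
      .map(eval => eval.badgeId -> eval)
      .toMap
}
Dropping a new badge jar on the classpath then makes its badges discoverable without touching this code.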
You can try and use Scala Continuations - they can give you the ability to serialize the computation and run it at later time or even on another machine.
Some links:
Continuations
What are Scala continuations and why use them?
Swarm - Concurrency with Scala Continuations
Serialization relates to data rather than to behaviour. You cannot serialize functionality: the functionality lives in a class file, and object serialization only serializes the fields of an object.
So like Alex says, you need a rule engine.
Try this one if you want something fairly simple, which is string based, so you can serialize the rules as strings in a database or file:
http://blog.maxant.co.uk/pebble/2011/11/12/1321129560000.html
Using a DSL has the same problems unless you interpret or compile the code at runtime.
I'm implementing a variant of the JUnit New Test Suite Wizard, and instead of getting test classes from the current project, I need to get them from another source. They come to me as strings of fully-qualified class names.
Some of them may not yet exist in this user's workspace, let alone in the classpath of the current project. The user will need to import the projects for these later, but I don't want to mess with that in my wizard yet. I need to just add all classes to the new suite whether they exist yet or not.
For those classes that are already in this project's classpath, I can use IJavaProject.findType(String fullyQualifiedName). Is there an analogous way to get ITypes for classes that are not (yet) visible?
I would be happy to construct an IType out of thin air, but ITypes don't seem to like being constructed.
I don't think that is possible: the Java Document Model interfaces are created based on the classpath.
Even worse, if the project does not exist in the user's workspace, the resulting code would not compile, and that is another reason for not allowing the arbitrary creation of such constructs.
If I were you, I would try to help the user import the non-existing projects when the types are not available, thus avoiding having to tackle the Java Document Model.
For my purposes, creating a HypotheticalType and a HypotheticalMethod got the job done. I'm attaching an overview in case anyone else needs to follow this path.
First I created a HypotheticalType and had it implement the IType interface. I instantiated one of these at the proper spot in my modified wizard. Using Eclipse's Outline view I created a method breakpoint on all methods in my new class. This let me detect which methods were actually getting called during execution of my wizard. I also modified the constructor to take, as a String, the name of the class I needed the wizard to handle.
Almost all of the new methods are ignored in this exercise. I found that I could keep the default implementation (return null or return false in most cases) for all methods except the following:
the constructor
exists() - no modification needed
getAncestor(int) - no modification needed, but it might be useful to return the package of my hypothetical class, e.g. if my class is java.lang.Object.class, return java.lang.
getDeclaringType() - no modification needed
getElementName() - modified to return the class name, e.g. if my class is java.lang.Object.class, return Object.
getElementType() - modified to return IJavaElement.TYPE
getFlags() - not modified yet, but might be
getMethod(String, String[]) - modified to return a new HypotheticalMethod(name)
getMethods() - modified to return new IMethod[] { new HypotheticalMethod("dudMethod") }
In the process I discovered that I need to be able to return a HypotheticalMethod, so I created that type as well, inheriting from IMethod, and used the same techniques to determine which methods had to be implemented. These are the only methods that get called while this wizard runs:
The constructor - add a String parameter to bring in the name of the method
exists() - no modification needed
isMainMethod() - no modification needed
That covers the solution to my original question. Zoltán, I'll be doing as you suggested in an upcoming iteration and trying to assist the user in both the case in which the desired class is not yet in this project's classpath, and the case in which the desired class is in some project not yet in the workspace.
I am using ScalaMock and Mockito.
I have this simple code:
class MyLibrary {
  def doFoo(id: Long, request: Request) = {
    println("came inside real implementation")
    Response(id, request.name)
  }
}

case class Request(name: String)
case class Response(id: Long, name: String)
I can easily mock it using this code
val lib = new MyLibrary()
val mock = spy(lib)
when(mock.doFoo(1, Request("bar"))).thenReturn(Response(10, "mock"))
val response = mock.doFoo(1, Request("bar"))
response.name should equal("mock")
But if I change my code to
val lib = new MyLibrary()
val mock = spy(lib)
when(mock.doFoo(anyLong(), any[Request])).thenReturn(Response(10, "mock"))
val response = mock.doFoo(1, Request("bar"))
response.name should equal("mock")
I see that it goes inside the real implementation and gets a null pointer exception.
I am pretty sure it goes inside the real implementation without matchers too; the difference is that it just doesn't crash in that case (any ends up passing null into the call).
When you write when(mock.doFoo(...)), mock.doFoo has to be evaluated first to produce the argument that is passed to when.
Doing this with mock works, because all implementations are stubbed out, but spy wraps the actual object, so the implementations are all real too.
Spies are frowned upon in the Mockito world and are considered a code smell.
If you find yourself having to mock out some functionality of your class while keeping the rest of it, it is almost surely a sign that you should split it into two separate classes. Then you'd be able to just mock the whole "underlying" object entirely, and have no need to spy on things.
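A rough sketch of that split, where only the collaborator gets mocked and MyLibrary itself stays real; FooBackend and the constructor wiring are illustrative, not from the question:
class FooBackend {
  def fetch(id: Long, request: Request): Response = Response(id, request.name)
}

class MyLibrary(backend: FooBackend) {
  def doFoo(id: Long, request: Request): Response = backend.fetch(id, request)
}

// in the test:
//   val backend = mock(classOf[FooBackend])
//   when(backend.fetch(anyLong(), any[Request])).thenReturn(Response(10, "mock"))
//   val lib = new MyLibrary(backend)
//   lib.doFoo(1, Request("bar")).name should equal("mock")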
If you are still set on using spies for some reason, doReturn would be the workaround, as the other answer suggests. You should not pass null as the vararg parameter though; it changes the semantics of the call. Something like this should work:
doReturn(Response(10, "mock"), Array.empty:_*).when(mock).doFoo(any(), any())
But, I'll stress it once again: this is just a work around. The correct solution is to use mock instead of spy to begin with.
Try this:
doReturn(Response(10, "mock"), null.asInstanceOf[Array[Object]]: _*).when(mock).doFoo(anyLong(), any[Request])