I am testing using JUnit 4 with Eclipse. I want to test the method expandAll:
public void expandAll(TreeExpansionModel<TreeData> expansionModel)
{
    List<TreeNode<TreeData>> roots = getTreeModel().getRootNodes();
    for (TreeNode<TreeData> root : roots)
    {
        expandAllNode(root, expansionModel);
    }
}

private void expandAllNode(TreeNode<TreeData> node, TreeExpansionModel<TreeData> expansionModel)
{
    if (node.getHasChildren())
    {
        expansionModel.markExpanded(node);
        for (TreeNode<TreeData> child : node.getChildren())
        {
            expandAllNode(child, expansionModel); // this is a recursive call
        }
    }
}
The problem I am having is with the expansionModel. In my program (not the test), I get the expansionModel from the tree component.
Here is the code fragment from the Java class:
@InjectComponent
private Tree tree;

public void onExpandAll()
{
    expansionModel = tree.getExpansionModel();
    treeFunction.expandAll(expansionModel);
    ajaxResponseRenderer.addRender(treeZone);
}
I have tried the following in my test:
tree = new Tree();
expansionModel = tree.getExpansionModel();
testing.expandAll(expansionModel);
but the expansionModel I get is null. How do I go about testing with the @InjectComponent tree?
Any help would be appreciated. Thanks.
Unit testing of pages that contain components can be difficult; it often requires adding special constructors to your components that are ONLY required for testing. This becomes even harder when the components come from an external source (i.e. tapestry-core).
Have you considered Selenium testing instead? I often find that unit testing pages requires lots of effort for little gain.
If you really want to unit test this page, I suggest refactoring the code to isolate the Tree dependency:
@InjectComponent
private Tree tree;

public void onExpandAll() {
    onExpandAll(tree.getExpansionModel());
}

protected void onExpandAll(TreeExpansionModel expansionModel) {
    treeFunction.expandAll(expansionModel);
    ajaxResponseRenderer.addRender(treeZone);
}
Then you can unit test the second onExpandAll method without needing a Tree instance using DefaultTreeExpansionModel or similar.
Thanks to uklance.
I just need to use DefaultTreeExpansionModel.
Here is the code fragment from my test:
expansionModel = new DefaultTreeExpansionModel();
testing.expandAll(expansionModel);
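For reference, a fuller JUnit 4 test along these lines could look like the sketch below. TreeFunctions and TreeData are stand-in names for the class under test and its node type; construct the object under test however your code allows without the Tree component, and note that for brevity the assertion only checks the root nodes.
import static org.junit.Assert.assertTrue;

import org.apache.tapestry5.tree.DefaultTreeExpansionModel;
import org.apache.tapestry5.tree.TreeExpansionModel;
import org.apache.tapestry5.tree.TreeNode;
import org.junit.Test;

public class TreeFunctionsTest {

    @Test
    public void expandAllMarksParentNodesAsExpanded() {
        // TreeFunctions is a placeholder for the class that owns expandAll()
        // and getTreeModel(); build it with whatever tree model fixture you use.
        TreeFunctions testing = new TreeFunctions(/* tree model fixture */);
        TreeExpansionModel<TreeData> expansionModel = new DefaultTreeExpansionModel<TreeData>();

        testing.expandAll(expansionModel);

        // Only the root nodes are checked here for brevity.
        for (TreeNode<TreeData> root : testing.getTreeModel().getRootNodes()) {
            if (root.getHasChildren()) {
                assertTrue(expansionModel.isExpanded(root));
            }
        }
    }
}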
The Versioning API is powerful. However, with the usual pattern of using it, the code quickly gets messy and hard to read and maintain.
Over time, the product needs to move fast to introduce new business requirements. Is there any advice on using this API wisely?
I would suggest using a Global Version Provider design pattern in Cadence/Temporal workflows if possible.
Key Idea
The versioning API is very powerful: it lets you change the behavior of existing workflow executions in a deterministic (backward-compatible) way. In the real world, you may only care about adding new behavior, and be okay with introducing that behavior only to newly started workflow executions. In this case, you can use a global version provider to unify the versioning for the whole workflow.
The key idea is that we version the whole workflow (that's why it's called GlobalVersionProvider). Every time we add a new version, we update the version provider to expose that new version.
Example In Java
import com.google.common.annotations.VisibleForTesting;
import com.google.common.collect.ImmutableMap;
import io.temporal.workflow.Workflow;
import java.util.HashMap;
import java.util.Map;

public class GlobalVersionProvider {
    private static final String WORKFLOW_VERSION_CHANGE_ID = "global";
    private static final int STARTING_VERSION_USING_GLOBAL_VERSION = 1;
    private static final int STARTING_VERSION_DOING_X = 2;
    private static final int STARTING_VERSION_DOING_Y = 3;
    private static final int MAX_STARTING_VERSION_OF_ALL =
        STARTING_VERSION_DOING_Y;

    // Workflow.getVersion can release a thread and subsequently cause a non-deterministic error.
    // We're introducing this map in order to cache our versions on the first call, which should
    // always occur at the beginning of a workflow
    private static final Map<String, GlobalVersionProvider> RUN_ID_TO_INSTANCE_MAP =
        new HashMap<>();

    private final int versionOnInstantiation;

    private GlobalVersionProvider() {
        versionOnInstantiation =
            Workflow.getVersion(
                WORKFLOW_VERSION_CHANGE_ID,
                Workflow.DEFAULT_VERSION,
                MAX_STARTING_VERSION_OF_ALL);
    }

    private int getVersion() {
        return versionOnInstantiation;
    }

    public boolean isAfterVersionOfUsingGlobalVersion() {
        return getVersion() >= STARTING_VERSION_USING_GLOBAL_VERSION;
    }

    public boolean isAfterVersionOfDoingX() {
        return getVersion() >= STARTING_VERSION_DOING_X;
    }

    public boolean isAfterVersionOfDoingY() {
        return getVersion() >= STARTING_VERSION_DOING_Y;
    }

    public static GlobalVersionProvider get() {
        String runId = Workflow.getInfo().getRunId();
        GlobalVersionProvider instance;
        if (RUN_ID_TO_INSTANCE_MAP.containsKey(runId)) {
            instance = RUN_ID_TO_INSTANCE_MAP.get(runId);
        } else {
            instance = new GlobalVersionProvider();
            RUN_ID_TO_INSTANCE_MAP.put(runId, instance);
        }
        return instance;
    }

    // NOTE: this should be called at the beginning of the workflow method
    public static void upsertGlobalVersionSearchAttribute() {
        int workflowVersion = get().getVersion();
        Workflow.upsertSearchAttributes(
            ImmutableMap.of(
                WorkflowSearchAttribute.TEMPORAL_WORKFLOW_GLOBAL_VERSION.getValue(),
                workflowVersion));
    }

    // Call this API in each replay test to clear the cache
    @VisibleForTesting
    public static void clearInstances() {
        RUN_ID_TO_INSTANCE_MAP.clear();
    }
}
Note that because of a bug in the Temporal/Cadence Java SDK, Workflow.getVersion can release a thread and subsequently cause a non-deterministic error.
We're introducing this map in order to cache our versions on the first call, which should always occur at the beginning of the workflow execution.
Call the clearInstances API in each replay test to clear the cache.
Therefore, in the workflow code:
public class HelloWorldImpl {
    private GlobalVersionProvider globalVersionProvider;

    @VisibleForTesting
    public HelloWorldImpl(final GlobalVersionProvider versionProvider) {
        this.globalVersionProvider = versionProvider;
    }

    public HelloWorldImpl() {
        this.globalVersionProvider = GlobalVersionProvider.get();
    }

    @Override
    public void start(final Request request) {
        if (globalVersionProvider.isAfterVersionOfUsingGlobalVersion()) {
            GlobalVersionProvider.upsertGlobalVersionSearchAttribute();
        }
        ...
        ...
        if (globalVersionProvider.isAfterVersionOfDoingX()) {
            // doing X here
            ...
        }
        ...
        if (globalVersionProvider.isAfterVersionOfDoingY()) {
            // doing Y here
            ...
        }
        ...
    }
}
Best practice with the pattern
How to add a new version
For every new version
Add the new constant STARTING_VERSION_XXXX
Add a new API `public boolean isAfterVersionOfXXX()`
Update MAX_STARTING_VERSION_OF_ALL
Apply the new API into workflow code where you want to add the new logic
Maintain the replay test JSON files in a pattern like `HelloWorldWorkflowReplaytest-version-x-description.json` (a sketch follows below). Make sure to always add a new replay test for every new version you introduce to the workflow. When generating the JSON from a workflow execution, make sure it exercises the new code path – otherwise it won't be able to protect determinism. If it requires more than one workflow execution to exercise all branches, then make multiple JSON files for replay.
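As a sketch of what such a replay test could look like with the Temporal Java SDK (the JSON file name and version number are placeholders following the naming pattern above, and the workflow implementation class is the HelloWorldImpl shown earlier):
import io.temporal.testing.WorkflowReplayer;
import org.junit.Before;
import org.junit.Test;

public class HelloWorldWorkflowReplayTest {

    @Before
    public void setUp() {
        // Clear the cached GlobalVersionProvider instances between replays,
        // as described above.
        GlobalVersionProvider.clearInstances();
    }

    @Test
    public void replayVersion3History() throws Exception {
        // The JSON file is a history exported from a real execution that
        // exercised the version-3 (DoingY) code path.
        WorkflowReplayer.replayWorkflowExecutionFromResource(
            "HelloWorldWorkflowReplaytest-version-3-doing-y.json", HelloWorldImpl.class);
    }
}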
How to remove an old version
To remove an old code path (version), add a new version that no longer executes the old code path; then, later on, use a search attribute query like
GlobalVersion >= STARTING_VERSION_DOING_X AND GlobalVersion < STARTING_VERSION_NOT_DOING_X to find out whether any existing workflow executions are still running with those versions.
Instead of waiting for those workflows to close, you can terminate or reset them.
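A rough sketch of running such a query with the Temporal Java SDK's service stubs is shown below; the namespace, the client setup, and the registered search attribute name (GlobalVersion here) are assumptions that must match your cluster configuration:
import io.temporal.api.workflowservice.v1.ListWorkflowExecutionsRequest;
import io.temporal.api.workflowservice.v1.ListWorkflowExecutionsResponse;
import io.temporal.serviceclient.WorkflowServiceStubs;

public class GlobalVersionQuery {
    public static void main(String[] args) {
        // Connects to a locally running Temporal frontend; adjust for your environment.
        WorkflowServiceStubs service = WorkflowServiceStubs.newLocalServiceStubs();

        // The literals 2 and 4 correspond to STARTING_VERSION_DOING_X and
        // STARTING_VERSION_NOT_DOING_X in the provider above.
        ListWorkflowExecutionsRequest request =
            ListWorkflowExecutionsRequest.newBuilder()
                .setNamespace("default") // assumed namespace
                .setQuery("GlobalVersion >= 2 AND GlobalVersion < 4") // assumed search attribute name
                .build();

        ListWorkflowExecutionsResponse response =
            service.blockingStub().listWorkflowExecutions(request);
        System.out.println("Matching executions: " + response.getExecutionsCount());
    }
}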
Example of deprecating a code path DoingX:
Therefore, in the workflow code:
public class HelloWorldImpl implements HelloWorld {
    ...

    @Override
    public void start(final Request request) {
        ...
        ...
        if (globalVersionProvider.isAfterVersionOfDoingX()
                && !globalVersionProvider.isAfterVersionOfNotDoingX()) {
            // doing X here
            ...
        }
    }
}
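A sketch of the corresponding change on the provider side, following the "how to add a new version" steps above (the constant value is illustrative):
// Inside GlobalVersionProvider: add the new version constant ...
private static final int STARTING_VERSION_NOT_DOING_X = 4;

// ... bump MAX_STARTING_VERSION_OF_ALL to the new constant ...
private static final int MAX_STARTING_VERSION_OF_ALL =
    STARTING_VERSION_NOT_DOING_X;

// ... and expose the new check used in the workflow code above.
public boolean isAfterVersionOfNotDoingX() {
    return getVersion() >= STARTING_VERSION_NOT_DOING_X;
}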
TODO: Example in Golang
Benefits
Prevents the spaghetti code that results from using the native Temporal versioning API everywhere in the workflow code
Provides a search attribute to find workflows of a particular version. This fills the gap left by the Temporal Java SDK missing the TemporalChangeVersion feature.
Even though the Cadence Java/Golang SDK has CadenceChangeVersion, this global version search attribute is much better for querying, because it's an integer instead of a keyword.
Provides a pattern to maintain replay tests easily
Provides a way to test different versions even without that missing SDK feature
Cons
There shouldn't be any cons. Using this pattern doesn't stop you from using the raw versioning API directly in the workflow. You can combine this pattern with others.
I am completely new to ActionScript and Adobe Flash CS6, and for a little bit of fun I have decided to try to make a little game. I have a few newbie (or noob-y) questions to ask about the general implementation approach.
The documentation I've been reading so far suggests creating a new Flash project and then creating a document class, like so:
package {
    import flash.display.MovieClip;

    public class MyMainClass extends MovieClip {
        public function MyMainClass() {
        }
    }
}
and I am wondering whether I should use this main class to code the whole game, include ActionScript within a scene and have multiple scenes, or some combination of both.
Let's say I wanted 5 levels in my game; would I do something like:
package {
    import flash.display.MovieClip;

    public class MyMainClass extends MovieClip {
        public function MyMainClass() {
            StartLevel1();
            StartLevel2();
            StartLevel3();
            StartLevel4();
            StartLevel5();
        }

        public function StartLevel1() {
            // Do something
        }

        public function StartLevel2() {
            // Do something
        }

        public function StartLevel3() {
            // Do something
        }

        public function StartLevel4() {
            // Do something
        }

        public function StartLevel5() {
            // Do something
        }
    }
}
or create 5 scenes with ActionScript in each scene?
Can anyone provide me with a bit of a starting point?
Thanks
I don't know of anyone who has anything good to say about scenes.
However, as you intuit, the timeline itself is a wonderful tool for managing the state of your Flash assets over time. If you use it, you also get the hidden advantage that you don't have to download 100% of your file to be able to use it (so you can reduce or even eliminate the need for a preloader by unchecking "Export in frame N" on your library symbols).
Lars has quite rightly pointed out that there are very few developers who understand this technique, and I know of exactly one who can and will help people who are interested in exploring this technique. That person is helping you right now. So if you choose to go that way, keep in mind you are mostly on your own except if I happen to notice your post and respond to it.
I am not in favor of timeline scripts, with very few exceptions. What I suggest is a "both-and" approach, where you use a document class to control timeline instances.
Your document class might look something like this:
public class Game extends MovieClip {
    protected var _level:ILevel; // interface your Level MovieClips will implement
    protected var levelController:LevelController = new LevelController();
    protected var currentLevel:int;
    protected var maxLevels:int = 5;

    public function Game() {
        levelController.addEventListener(LevelEventKind.LEVEL_COMPLETE, nextLevel);
        levelController.addEventListener(LevelEventKind.LEVEL_FAILED, gameOver);
        startLevel(currentLevel);
    }

    public function startLevel(levelNumber:int):void {
        goToLabel('Level' + String(levelNumber));
    }

    public function get level():ILevel {
        return _level;
    }

    public function set level(value:ILevel):void {
        _level = value;
        // internally, this should release all listeners to the last
        // level object (if any) so you don't get a memory leak
        levelController.level = _level;
    }

    protected function nextLevel(e:Event):void {
        if (currentLevel < maxLevels) {
            startLevel(++currentLevel);
        } else {
            // do your "you won" logic here
        }
    }

    protected function gameOver(e:Event):void {
        // do your "bombed out" logic here
    }

    protected function goToLabel(label:String):void {
        for each (var frameLabel:FrameLabel in currentLabels) {
            if (frameLabel.name == label) {
                // if your swf is media-heavy, you may want to check that the frame
                // is loaded if you chose to reduce/eliminate the preloader
                gotoAndStop(label);
                return;
            }
        }
        trace('no such label as', label);
    }
}
What this gets you is a game where you can change how the different levels look without changing a single line of ActionScript, and you can change how they work by assigning different base classes that implement ILevel slightly differently. You can also change your functionality by swapping out different flavors of LevelController, but your main document class (Game in this instance) would be aware of that change (whereas the other changes could be made without altering Game at all).
I am trying to test a repository that uses DbContext.
The issue I am running into is that DbContext wants to return DbSet for certain types.
I can't even mock IDbSet, as it has a private constructor.
How is everyone getting around this?
You need to make use of an adapter or wrapper. The DbContext is third-party code. Your code should hide it behind an abstraction; remember, this is just an implementation detail of how the internals of your system work. You should be free to change it at any time.
public class ThatsHardToTest
{
    // Private constructors, slow start up time etc...
    public int AlwaysReturnOneExceptInSuperRareScnearios()
    {
        // Complex logic.
        return 1;
    }
}
Now suppose we want to test our code for the case where the above method goes wrong, e.g. the database system is offline. I don't want the test to hit the database, so we need an adapter around this third-party code. I would make either an interface or a base class.
public class MyTestAdapter : IExampleAdapter
{
    public int ReturnWhateverIWant()
    {
        return -1;
    }
}
I would use MyTestAdapter for unit testing my code, as I can control what it does. For the production code, you simply substitute an adapter that delegates to the real production system. For example:
public class MyRealAdapter : IExampleAdapter
{
    public int ReturnWhateverIWant()
    {
        return new ThatsHardToTest().AlwaysReturnOneExceptInSuperRareScnearios();
    }
}
This is more a solution/workaround than an actual question. I'm posting it here since I couldn't find this solution on Stack Overflow or indeed after a lot of Googling.
The Problem:
I have an MVC 3 webapp using EF 4 code first that I want to write unit tests for. I'm also using NCrunch to run the unit tests on the fly as I code, so I'd like to avoid backing onto an actual database here.
Other Solutions:
IDataContext
I've found this to be the most accepted way to create an in-memory data context. It effectively involves writing an interface IMyDataContext for your MyDataContext and then using the interface in all your controllers. An example of doing this is here.
This is the route I went with initially and I even went as far as writing a T4 template to extract IMyDataContext from MyDataContext since I don't like having to maintain duplicate dependent code.
However, I quickly discovered that some LINQ statements fail in production when using IMyDataContext instead of MyDataContext. Specifically, queries like this throw a NotSupportedException:
var siteList = from iSite in MyDataContext.Sites
               let iMaxPageImpression = (from iPage in MyDataContext.Pages
                                         where iSite.SiteId == iPage.SiteId
                                         select iPage.AvgMonthlyImpressions).Max()
               select new { Site = iSite, MaxImpressions = iMaxPageImpression };
My Solution
This was actually quite simple. I simply created an InMemoryDataContext subclass of MyDataContext and overrode all the IDbSet<...> properties as below:
public class InMemoryDataContext : MyDataContext, IObjectContextAdapter
{
    /// <summary>Whether SaveChanges() was called on the DataContext</summary>
    public bool SaveChangesWasCalled { get; private set; }

    public InMemoryDataContext()
    {
        InitializeDataContextProperties();
        SaveChangesWasCalled = false;
    }

    /// <summary>
    /// Initialize all MyDataContext properties with appropriate container types
    /// </summary>
    private void InitializeDataContextProperties()
    {
        Type myType = GetType().BaseType; // We have to do this since private property setters are not accessible through GetType()

        // Initialize all IDbSet<T> properties with CollectionDbSet<T> instances
        var dbSets = myType.GetProperties()
            .Where(x => x.PropertyType.IsGenericType && x.PropertyType.GetGenericTypeDefinition() == typeof(IDbSet<>))
            .ToList();
        foreach (var iDbSetProperty in dbSets)
        {
            var concreteCollectionType = typeof(CollectionDbSet<>).MakeGenericType(iDbSetProperty.PropertyType.GetGenericArguments());
            var collectionInstance = Activator.CreateInstance(concreteCollectionType);
            iDbSetProperty.SetValue(this, collectionInstance, null);
        }
    }

    ObjectContext IObjectContextAdapter.ObjectContext
    {
        get { return null; }
    }

    public override int SaveChanges()
    {
        SaveChangesWasCalled = true;
        return -1;
    }
}
In this case my CollectionDbSet<> is a slightly modified version of FakeDbSet<> here (which simply implements IDbSet with an underlying ObservableCollection and ObservableCollection.AsQueryable()).
This solution works nicely with all my unit tests and specifically with NCrunch running these tests on the fly.
Full Integration Tests
These unit tests exercise all the business logic, but one major downside is that none of your LINQ statements are guaranteed to work with your actual MyDataContext. This is because testing against an in-memory data context means you're replacing the LINQ to Entities provider with a LINQ to Objects provider (as pointed out very well in the answer to this SO question).
To fix this, I use Ninject within my unit tests and set up InMemoryDataContext to be bound instead of MyDataContext. You can then use Ninject to bind to an actual MyDataContext when running the integration tests (via a setting in the app.config).
if (Global.RunIntegrationTest)
    DependencyInjector.Bind<MyDataContext>().To<MyDataContext>().InSingletonScope();
else
    DependencyInjector.Bind<MyDataContext>().To<InMemoryDataContext>().InSingletonScope();
Let me know if you have any feedback on this, however; there are always improvements to be made.
As per my comment in the question, this was posted more to help others searching for this problem on SO. As pointed out in the comments underneath the question, there are quite a few other design approaches that would fix this problem.
I would like to be able to run tests on my fake repository (which uses a list) and my real repository (which uses a database) to make sure that both my mocked-up version and my actual production repository work as expected. I thought the easiest way would be to use TestCase:
private readonly StandardKernel _kernel = new StandardKernel();
private readonly IPersonRepository fakePersonRepository;
private readonly IPersonRepository realPersonRepository;

[Inject]
public PersonRepositoryTests()
{
    realPersonRepository = _kernel.Get<IPersonRepository>();
    _kernel = new StandardKernel(new TestModule());
    fakePersonRepository = _kernel.Get<IPersonRepository>();
}

[TestCase(fakePersonRepository)]
[TestCase(realPersonRepository)]
public void CheckRepositoryIsEmptyOnStart(IPersonRepository personRepository)
{
    if (personRepository == null)
    {
        throw new NullReferenceException("Person Repository never injected: is null");
    }

    var records = personRepository.GetAllPeople();
    Assert.AreEqual(0, records.Count());
}
but it asks for a constant expression.
Attributes are a compile-time decoration for a member, so anything that you put in a TestCase attribute has to be a constant that the compiler can resolve.
You can try something like this (untested):
[TestCase(typeof(FakePersonRepository))]
[TestCase(typeof(PersonRepository))]
public void CheckRepositoryIsEmptyOnStart(Type personRepoType)
{
    // do some reflection-based Activator.CreateInstance() stuff here
    // to instantiate the incoming type
}
However, this gets a bit ugly, because I imagine that your two different implementations might have different constructor arguments. Plus, you really don't want all that dynamic type instantiation code cluttering the test.
A possible solution might be something like this:
[TestCase("FakePersonRepository")]
[TestCase("TestPersonRepository")]
public void CheckRepositoryIsEmptyOnStart(string repoType)
{
// Write a helper class that accepts a string and returns a properly
// instantiated repo instance.
var repo = PersonRepoTestFactory.Create(repoType);
// your test here
}
Bottom line is, the test case attribute has to take a constant expression. But you can achieve the desired result by shoving the instantiation code into a factory.
You might look at the TestCaseSource attribute, though that may fail with the same error. Otherwise, you may have to settle for two separate tests, which both call a third method to handle all of the common test logic.