After implementing some of the rules for my project, I did a "ScoreConsistencyCheck" to make sure the rules were implemented correctly.
By "ScoreConsistencyCheck" I mean a Java method of my own that is called after solving ends (either because I terminate it early or because it terminates via the configured termination) and that outputs the expected score. The method takes a Solution instance as its parameter; based on the state of that solution it calculates the expected score and then compares it to the score held in the solution's "score" field.
When I use FULL_ASSERT it doesn't throw a score corruption exception, yet with this check I sometimes get a score difference at a particular step, either in the construction heuristic or in local search. My guess is that this happens because OptaPlanner does not know what the expected score for a given solution is; all FULL_ASSERT cares about is that the step score matches the score recalculated after the undo move is done.
And because the "ScoreConsistencyCheck" is only called after solving has ended, I can't really deduce which case is causing the problem (if it is causing any), because the move and step at which the difference occurred are unknown.
Because of this I'm looking for a way to see my expected score (from the "ScoreConsistencyCheck") after each move, so that I can compare it with OptaPlanner's and find the cases my calculations miss. To do this I need a way to get the working solution after each move.
After some searching I wasn't able to find much. I did, however, find that OptaPlanner 7.0.0.Beta has a ScoreVerifier (I'm using OptaPlanner 6.4.0), but the issues with that are:
I don't know if that will accomplish what i'm looking for, as there is little documentation about it.
I'm having trouble implementing it.
My questions here are:
How to get the workingSolution after each move, and use it for the check?
Is there a feature in OptaPlanner 6.4.0 that will allow me to do this?
If there isn't a feature, is there a possible workaround?
Is there a better way to check the score consistency of the rules?
Yes, <assertionScoreDirectorFactory> (see the docs). Use that in combination with FULL_ASSERT and you'll have a more isolated view of where score corruption first started.
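As a rough illustration of how the existing check could be reused there: wrap it in an EasyScoreCalculator and reference that class from the assertionScoreDirectorFactory element in the solver config, next to the main score director, so the asserting environment mode compares the rule-calculated score against your own calculation and reports where they first diverge. The class names below are hypothetical placeholders, and the interface package and exact config element names should be verified against the 6.4.0 documentation.
// Hypothetical sketch: the ScoreConsistencyCheck logic exposed as an
// EasyScoreCalculator so it can serve as the assertion score director.
// MySolution (assumed to implement the Solution interface) and
// ScoreConsistencyCheck are placeholders; the package shown is the 6.x location.
import org.optaplanner.core.api.score.Score;
import org.optaplanner.core.impl.score.director.easy.EasyScoreCalculator;

public class ConsistencyCheckScoreCalculator implements EasyScoreCalculator<MySolution> {

    @Override
    public Score calculateScore(MySolution solution) {
        // Recalculate the expected score from the full state of the solution,
        // exactly like the ScoreConsistencyCheck does after solving ends.
        return new ScoreConsistencyCheck().recalculateExpectedScore(solution);
    }
}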
I've run into a frustrating feature of KVO: all notifications are funneled through a single method (observeValueForKeyPath:....), requiring a bunch of IF statements if the object is observing numerous properties.
The ideal solution would be to pass a method as an argument to the method that establishes the observing in the first place, but it seems this isn't possible. Does a solution exist to this problem? I initially considered using the keyPath argument of addObserver:forKeyPath:options:context: to call a method via NSSelectorFromString, but then I came across the post "KVO Dispatcher pattern with Method as context" and the article it links to, which offers a different solution that also passes arguments along (although I haven't gotten that working yet).
I know a lot of people have come up against this issue. Has a standard way of handling it emerged?
OP asks:
Has a standard way of handling it emerged?
No, not really. There are a lot of different approaches out there. Here are some:
https://github.com/sleroux/KVO-Blocks
http://pandamonia.github.io/BlocksKit
http://www.mikeash.com/pyblog/friday-qa-2012-03-02-key-value-observing-done-right-take-2.html
https://github.com/ReactiveCocoa/ReactiveCocoa
http://blog.andymatuschak.org/post/156229939/kvo-blocks-block-callbacks-for-cocoa-observers
Seriously, there are a ton of these... Google "KVO blocks"
I can't say that any of the options I've seen seem prevalent enough to earn the title "standard way". I suspect most folks who feel motivated to conquer this issue just pick one and go with it, or write their own -- it's not as if adapting KVO to use block based callbacks is rocket science. The Method-based approach you link to doesn't seem like a step forward for simplicity. I get that you're trying to take the uncertainty of the string-based-key-path <-> method conversion out of the equation, but that kind of falls down because not all observable keys/keyPaths are methods. (If nothing else, you can observe arbitrary keys on NSMutableDictionaries and get notifications.)
It sure would be nice if Apple would release a new blocks-based KVO API, but I'm not holding my breath. But in the meantime, like I said, just pick one you like and use it or write your own and use that.
When software is developed, various types of testing are done: unit, integration, functional, manual. My current project (WinForms with SQL Server) has legacy code (no tests), and we have a lot of bugs.
We are trying to remove them using a combination of manual testing and automated tests (mostly integration tests).
But some bugs can still escape.
For example (a hypothetical scenario): if a customer has purchased a certain amount of goods over the last 6 months, he should be given a discount on purchases he makes once those 6 months have lapsed, and his status should be updated to privileged.
But for some reason (a bug in the code) the system is not doing so. How should we tackle such scenarios? Should we have a script running on the database that looks for scenarios like the one described? An extension of the scenario: the customer must be sent a gift once he becomes privileged, but the system fails to do so.
Thoughts?
"Should we have a script running on the database which looks for scenarios such as described?"
Do you mean "put a script in the database to correct the problem", then no.
NO. Never. Under no circumstances. Working around a bug by adding peculiar special-case logic is really a very bad idea.
When that peculiar special-case logic has its own bugs, you've added buggy code to try to correct buggy code. A net loss.
When you try to enhance the system, you have this peculiar special-case logic that doesn't make any sense.
a. If you're lucky, you fixed the bug it was supposed to work around, and it will be redundant. What now? Which copy to remove?
b. Otherwise, it will contradict other code. What now? Which is right?
If you mean "put a script in the database to help find and debug the problem", then yes. For a short time, use every tool at your disposal to find and fix bugs. Once found and fixed, this script is then useless and must be deleted.
If you mean "write a script in the database to test the application", then yes. That's what unit test scripts are for. Use them.
It is far better to create unit tests than it is to create scripts that you put in the database. Unit tests are the best approach.
You should have an automated test suite in place. This test suite implements all the scenarios that the specification requires. Since one cannot wait six months to test that the discounting works, the actual implementation is replaced by a mock implementation (the example is in Java, but the same principles apply in other languages) that, for example, "simulates" that 6 months have passed. One can use assertions to automate the tests.
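A minimal sketch of that idea, using JUnit 4; Clock, Customer, and DiscountService here are hypothetical stand-ins for the project's real classes, the point being that "today" is injected so the test can pretend 6 months have passed:
import static org.junit.Assert.*;
import java.time.LocalDate;
import org.junit.Test;

// Hypothetical sketch: the source of "now" is injected, so the test can
// simulate that 6 months have passed without actually waiting.
public class PrivilegeDiscountTest {

    interface Clock { LocalDate today(); }

    static class Customer {
        LocalDate firstPurchase;
        boolean privileged;
    }

    static class DiscountService {
        private final Clock clock;
        DiscountService(Clock clock) { this.clock = clock; }

        void updateStatus(Customer c) {
            // Business rule under test: privileged after 6 months of purchases.
            if (!c.firstPurchase.plusMonths(6).isAfter(clock.today())) {
                c.privileged = true;
            }
        }
    }

    @Test
    public void customerBecomesPrivilegedAfterSixMonths() {
        Customer customer = new Customer();
        customer.firstPurchase = LocalDate.of(2016, 1, 1);

        // Mock clock pretending that more than 6 months have passed.
        DiscountService service = new DiscountService(() -> LocalDate.of(2016, 8, 1));
        service.updateStatus(customer);

        assertTrue(customer.privileged);
    }
}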
Once you have the whole test suite in place, if all tests pass after a refactoring or change to the code (just as they did before), you can be confident that no feature has been broken by the change.
How can I find out in which version of MATLAB or of a toolbox a particular function or class was first introduced? I know I can look through all the release notes, and Google can sometimes help, but is there a better way?
Ditto Jonas ... there is no version history for specific functions.
One other thing you can do (if you didn't know this already) is to check, in your current version of MATLAB, the value of exist('func'), where func is the name of the function. This returns 2 for MATLAB functions and 5 for built-in functions.
If you're going for compatibility in your scripts, I would add a condition that checks for the function's existence before you use it. Otherwise, if you have multiple versions of MATLAB, you can run a script that goes through all of them, or just do it by hand.
There isn't.
Except, if the place you work at has an active service contract with The MathWorks, you can send a service request and have them do the searching for you (be prepared to argue a bit if they just tell you to google the answer yourself). I do that from time to time in the hope that they'll eventually update the documentation.
I am struggling to understand why a post-compiler, like PostSharp, should ever be needed.
My understanding is that it just inserts code where attributed in the original code, so why doesn't the developer just do that code writing themselves?
I expect that someone will say it's easier to write since you can use attributes on methods and then not clutter them up with boilerplate code, but that can be done using DI or reflection and a touch of forethought without a post-compiler. I know that since I have mentioned reflection, the performance elephant will now enter the room - but I do not care about relative performance here, when the absolute performance for most scenarios is trivial (sub-millisecond to millisecond).
Let's try to take an architectural point of view on the issue. Say you are an architect (everyone wants to be an architect ;)
You need to deliver the architecture to your team:
a selected set of libraries, architectural patterns, and design patterns. As a part of your design, you say: "we will implement caching using the following design pattern:"
string key = string.Format("[{0}].MyMethod({1},{2})", this, param1, param2 );
T value;
if ( !cache.TryGetValue( key, out value ) )
{
    using ( cache.Lock(key) )
    {
        if (!cache.TryGetValue( key, out value ) )
        {
            // Do the real job here and store the value into variable 'value'.
            cache.Add( key, value );
        }
    }
}
This is a correct way to do caching. Developers are going to implement this pattern thousands of times, so you write a nice Word document telling them how you want the pattern to be implemented. Yeah, a Word document. Do you have a better solution? I'm afraid you don't. Classic code generators won't help. Functional programming (delegates)? It works fairly well for some aspects, but not here: you need to pass method parameters to the pattern. So what's left? Describe the pattern in natural language and trust that developers will implement it.
What will happen?
First, some junior developer will look at the code and say: "Hm. Two cache lookups. Kinda useless. One is enough." (That's not a joke -- ask the DNN team about this issue.) And your pattern ceases to be thread-safe.
As an architect, how do you ensure that the pattern is properly applied? Unit testing? Fair enough, but you will hardly detect threading issues this way. Code review? Maybe that's the solution.
Now, what if you decide to change the pattern? For instance, you detect a bug in the cache component and decide to use your own. Are you going to edit thousands of methods? It's not just refactoring: what if the new component has different semantics?
What if you decide that a method is not going to be cached any more? How difficult will it be to remove caching code?
The AOP solution (whatever the framework is) has the following advantages over plain code:
It reduces the number of lines of code.
It reduces the coupling between components, so you don't have to change many things when you decide to change the logging component (just update the aspect); it therefore improves the capacity of your source code to cope with new requirements over time.
Because there is less code, the probability of bugs is lower for a given set of features, so AOP improves the quality of your code.
So if you put it all together:
Aspects reduce both development costs and maintenance costs of software.
I have a 90 min talk on this topic and you can watch it at http://vimeo.com/2116491.
Again, the architectural advantages of AOP are independent of the framework you choose. The differences between frameworks (also discussed in this video) influence principally the extent to which you can apply AOP to your code, which was not the point of this question.
Suppose you already have a class which is well-designed, well-tested etc. You want to easily add some timing on some of the methods. Yes, you could use dependency injection, create a decorator class which proxies to the original but with timing for each method - but even that class is going to be a mess of repetition...
... or you can add reflection to the mix and use a dynamic proxy of some description, which lets you write the timing code once, but requires you to get that reflection code just right - which isn't as easy as it might be, especially if generics are involved.
... or you can add an attribute to each method that you want timed, write the timing code once, and apply it as a post-compile step.
I know which seems more elegant to me - and more obvious when reading the code. It can be applied even in situations where DI isn't appropriate (and it really isn't appropriate for every single class in a system) and with no other changes elsewhere.
AOP (PostSharp) is for attaching code to all sorts of points in your application, from one location, so you don't have to place it there.
You cannot achieve what PostSharp can do with Reflection.
I personally don't see a big use for it, in a production system, as most things can be done in other, better, ways (logging, etc).
You may like to review the other threads on this matter:
Anyone with Postsharp experience in production?
Other than logging, and transaction management what are some practical applications of AOP?
Aspect Oriented Programming: What do you use PostSharp for?
etc (search)
Aspects take away all the copy-and-paste code and make adding new features faster.
I hate nothing more than, for example, having to write the same piece of code over and over again. Gael has a very nice example regarding INotifyPropertyChanged on his website (www.postsharp.net).
This is exactly what AOP is for. Forget about the technical details, just implement what you are being asked for.
In the long run, I think we all should say goodbye to the way we are writing software now. It's tedious and plainly stupid to write boilerplate code and iterate manually.
The future belongs to a declarative, functional style held together by an object-oriented framework, with the cross-cutting concerns handled by aspects.
I guess the only people who will not get it soon are the guys who are still paid for lines of code.
How would you define "unwanted code"?
Edit:
IMHO, any code member with 0 active calling members (checked recursively) is unwanted code. (Functions, methods, properties, and variables are members.)
Here's my definition of unwanted code:
Code that never executes is dead weight. (Unless it's a [malicious] payload for your actual code, but that's another story :-))
Code that is repeated in multiple places increases the cost of the product.
Code that cannot be regression tested increases the cost of the product as well.
You can either remove such code or refactor it, but you don't want to keep it around as it is.
Zero active calls and no possibility of use in the near future. And I prefer never to comment anything out "in case I need it later", since I use SVN (source control).
Like you said in the other thread, code that is not used anywhere at all is pretty much unwanted. As for how to find it, I'd suggest FindBugs or CheckStyle if you are using Java, for example, since these tools check whether a function is used anywhere and mark it as unused if it isn't. Very nice for getting rid of unnecessary weight.
Well, after thinking about it briefly, I came up with these three points:
it can be code that should be refactored
it can be code that is not called any more (leftovers from earlier versions)
it can be code that does not apply to your style-guide and way-of-coding
I bet there is a lot more, but that's how I'd define unwanted code.
In Java I'd mark the method or class with @Deprecated.
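For illustration, a minimal sketch of that (the class and method names here are hypothetical); the annotation lets the compiler warn any remaining callers while the member awaits removal:
// Hypothetical example: marking a member that has no active callers.
public class OrderService {

    /**
     * @deprecated No longer called anywhere; use {@link #calculateTotal()} instead.
     */
    @Deprecated
    public int legacyTotal() {
        return calculateTotal();
    }

    public int calculateTotal() {
        return 0; // placeholder for the real calculation
    }
}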
Any PRIVATE code member with no active calling members (checked recursively). Otherwise you cannot know that the code is not used outside the scope of your analysis.
Some things have already been posted, but here's another: functions that almost do the same thing (only a small value changes, so the whole function is copy-pasted and just that one variable is changed).
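A hypothetical illustration of that kind of duplication, and the single parameterized method it usually collapses into:
// Hypothetical example of near-duplicate functions: only the threshold differs.
public class ReportFilter {

    // Before: two copy-pasted methods differing in a single constant.
    public boolean isLargeOrder(int amount) {
        return amount > 1000;
    }

    public boolean isHugeOrder(int amount) {
        return amount > 10000;
    }

    // After: one method with the varying value as a parameter.
    public boolean isOrderAbove(int amount, int threshold) {
        return amount > threshold;
    }
}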
Usually I tell my compiler to be as annoyingly noisy as possible; that picks up 60% of the stuff I need to examine. Unused functions that are months old (after checking with the VCS) usually get ousted, unless their author tells me when they'll actually be used. Stuff missing prototypes is also instantly suspect.
I think trying to implement automated house cleaning is like trying to make a USB device that guarantees that you 'safely' play Russian Roulette.
The hardest part to check is components added to the build system; few people notice those, and unused kludges are left to gather moss.
Beyond that, I typically WANT the code, I just want its author to refactor it a bit and make their style the same as the rest of the project.
Another helpful tool is Doxygen, which does help you (visually) see relations in the source tree. However, if it's set not to extract static symbols/objects, it's not going to be very thorough.