Rails Tutorial Chapter 5, Section 3.4: Need for the full_title helper. Help needed to understand the benefits - railstutorial.org

In Chapter 5, Section 3.4 of the Rails Tutorial (rails-3-2.railstutorial.org) it says:
Of course, this is essentially a duplicate of the helper in Listing 4.2, but having two independent methods allows us to catch any typos in the base title. This is dubious design, though, and a better (slightly more advanced) approach, which tests the original full_title helper directly, appears in the exercises (Section 5.6).
I did as it said, but I do not understand the mentioned benefit:
...having two independent methods allows us to catch any typos in the base title.
The "original" method in application_helper.rb and the test method in spec/support/utilities.rb act exactly the same. So from my point of view it´s a disvantage - there are two places to missspell.
I am new to ruby & rails and having a hard time cause the tutorial covers alot of new stuff, so please bear with me. I would be glad if someone could take some time to help me to understand.
Kind regards,
Stephen

RSpec is a tool for testing: a spec is a kind of checklist of what the code "should" do.
Here the helper in application_helper.rb produces the actual title, while the copy in spec/support/utilities.rb produces the expected title. If there is a typo in one of the two methods, the two strings no longer match, the RSpec test fails, and you notice that something is wrong. If the test used the one shared method instead, a typo in the base title would show up identically in both the page and the expectation, and the test would still pass.
I hope this helps.

Related

What does _xjal stand for in AnyLogic generated code?

I have inherited an AnyLogic model at work. It's my first contact with AnyLogic. There is hardly any documentation, so I am trying to dissect the generated code to understand what is going on.
There is one thing that appears all the time: _xjal
It appears in variable, member and method names - like _result_xjal, _value_xjal etc...
Usually it is easier to understand code when names make sense. That's why I hope the code will be easier to understand once this strange token, which Google knows nothing about, is deciphered.
What does it stand for?
"xj" stands for XJ-Tec (the former name of the company that develops AnyLogic).
"al" stands for AnyLogic.
"xjal" flags that this is code developed by the AnyLogic developers, i.e. it is neither standard Java code nor code from your model.
Nothing more and nothing less :)

Key-Value-Observation -- Looking for a more elegant solution to respond to value changes

I've run into a frustrating feature of KVO: all notifications are funneled through a single method (observeValueForKeyPath:ofObject:change:context:), requiring a bunch of IF statements if the object is observing numerous properties.
The ideal solution would be to pass a method as an argument to the method that establishes the observing in the first place, but it seems this isn't possible. Does a solution exist to this problem? I initially considered using the keyPath argument (addObserver:forKeyPath:options:context:) to call a method via NSSelectorFromString, but then I came across the post "KVO Dispatcher pattern with Method as context" and the article it links to, which offers a different solution in order to pass arguments along as well (although I haven't gotten that working yet).
I know a lot of people have come up against this issue. Has a standard way of handling it emerged?
OP asks:
Has a standard way of handling it emerged?
No, not really. There are a lot of different approaches out there. Here are some:
https://github.com/sleroux/KVO-Blocks
http://pandamonia.github.io/BlocksKit
http://www.mikeash.com/pyblog/friday-qa-2012-03-02-key-value-observing-done-right-take-2.html
https://github.com/ReactiveCocoa/ReactiveCocoa
http://blog.andymatuschak.org/post/156229939/kvo-blocks-block-callbacks-for-cocoa-observers
Seriously, there are a ton of these... Google "KVO blocks"
I can't say that any of the options I've seen seem prevalent enough to earn the title "standard way". I suspect most folks who feel motivated to conquer this issue just pick one and go with it, or write their own -- it's not as if adapting KVO to use block based callbacks is rocket science. The Method-based approach you link to doesn't seem like a step forward for simplicity. I get that you're trying to take the uncertainty of the string-based-key-path <-> method conversion out of the equation, but that kind of falls down because not all observable keys/keyPaths are methods. (If nothing else, you can observe arbitrary keys on NSMutableDictionaries and get notifications.)
It sure would be nice if Apple would release a new blocks-based KVO API, but I'm not holding my breath. In the meantime, like I said, just pick one you like and use it, or write your own and use that.

How should I restructure a large Perl script?

I have a more or less large Perl script of ~1000 lines. The script accepts a few arguments and runs straight through. No modules, no functions. The script could be divided into three parts: an initialization part, an argument-parsing part, and a work part, but I don't know how to do that. Everything must be kept in a single file. Please, can anyone give me instructions/advice on how to structure my Perl script?
Thanks.
You ask for advice on how to refactor your script, but you don't appear to understand why to refactor it. Without the why, the how isn't going to do you much good. And with the why, the how may fall out quite naturally.
If your script is working perfectly and needs no modification and all you'll ever do with it is run it, then you probably don't have a reason to refactor it - and I say that from the perspective of despising long routines. But...
If something's wrong with it
If you are trying to find a bug in your 1,000-line program, you have some hard work ahead of you. The problem could be anywhere. Break it up into smaller pieces so that you can verify the input and output at different stages - ideally, write tests for the smaller pieces. Fine-grained unit tests will tell you what isn't working, the nature of the error, and where the error exists.
If you need to modify it
If you need to change the script to - say - accommodate a new graphics format, or take advantage of multiple processors, or record its activities to a log - you will find it easier to extend if the program elements that need revision or extension are better isolated.
If you're trying to explain it to someone else, or show it off
You will find it much easier to convey the ideas in your script to another developer if the ideas are broken out into discrete methods.
So, there are some reasons why you might choose to refactor. If any of them apply, refactor accordingly; the how will drop out naturally. Extract Method may be your best friend.
If you can see logical parts of your script, you should definitely abstract them into functions. Having a single script of over 1000 lines and not breaking it up into whatever abstraction units your language provides (functions, classes, etc.) is a very bad idea. Maintaining your script, i.e. adding features and fixing bugs, will be a nightmare.
I strongly suggest you read the book Clean Code by Robert C. Martin. It uses Java for examples, but the ideas are applicable to any language. The one that is most relevant here is "Make your functions small. Then make them smaller."
1000 lines and no functions? Why not? This is a vague question.
You could break each section into a separate function, and then have a function that runs through each of those functions in the correct order, called 'run()' or something similar. This would allow you to break up the program into more manageable chunks.
P.S. Man, I think I used the word function too many times in this answer!
Have you refactored at all? At 1000 lines I'd expect to see some code that could be broken down into functions internal to the script.
Well, if you have three separate sections, that's the logical choice.
You could make each one into a function and then have a simple linear control at the top:
my ($var1, $var2, $var3);

$var1 = init();
$var2 = parseInput();
doWork();

sub init {
    # some code here
}

sub parseInput {
    # some code here
}

sub doWork {
    # some code here
}
The big issue is that you're going to be using globals a lot. I'd build them into a structure or two. I would also expect to see the big three broken down into functions themselves. Back in the '80s, when the big thing I was learning was structured programming (still the best design approach here, I think), the rule of thumb was that a function should fit on roughly one screen or less.
Most people would typically answer something like "a subroutine should do one thing" and "a subroutine should only take up one page in your editor". You can try to keep these things in mind when you refactor your code.
Try to identify parts of your code that can be split off into logical sections. You've started this process by spotting 'initialization', 'argument parsing', and 'work'. See if there are some sub-sections within that that can be pruned off into other subroutines.
Also, why do you not use any modules? One that springs to mind is Getopt::Long, which is a core module, so you won't have to install it manually. It will handle all of your argument parsing, and by using it you will probably avoid bugs and could shorten your code to make it more maintainable. By using standard modules like this, you not only (hopefully!) reduce the number of bugs in your code, you make it easier for other Perl programmers to understand.
You could look at search.cpan.org; maybe some Perl module suits your needs. For example, there is CGI::Application.

Why use a post compiler?

I am battling to understand why a post compiler, like PostSharp, should ever be needed.
My understanding is that it just inserts code where attributed in the original code, so why doesn't the developer just write that code themselves?
I expect that someone will say it's easier to write, since you can use attributes on methods and not clutter them up with boilerplate code, but that can be done using DI or reflection and a touch of forethought, without a post compiler. I know that since I have said reflection, the performance elephant will now enter - but I do not care about the relative performance here, when the absolute performance for most scenarios is trivial (sub-millisecond to millisecond).
Let's try to take an architectural point of view on the issue. Say you are an architect (everyone wants to be an architect ;)
You need to deliver the architecture to your team:
a selected set of libraries, architectural patterns, and design patterns. As a part of your design, you say: "we will implement caching using the following design pattern:"
string key = string.Format( "[{0}].MyMethod({1},{2})", this, param1, param2 );
T value;
if ( !cache.TryGetValue( key, out value ) )
{
    using ( cache.Lock(key) )
    {
        if ( !cache.TryGetValue( key, out value ) )
        {
            // Do the real job here and store the value into variable 'value'.
            cache.Add( key, value );
        }
    }
}
This is a correct way to do caching. Developers are going to implement this pattern thousands of times, so you write a nice Word document telling them how you want the pattern to be implemented. Yeah, a Word document. Do you have a better solution? I'm afraid you don't. Classic code generators won't help. Functional programming (delegates)? It works fairly well for some aspects, but not here: you need to pass method parameters to the pattern. So what's left? Describe the pattern in natural language and trust that developers will implement it properly.
What will happen?
First, some junior developer will look at the code and say, "Hm. Two cache lookups. Kinda useless. One is enough." (That's not a joke - ask the DNN team about this issue.) And your pattern ceases to be thread-safe: the second lookup, inside the lock, is what stops two threads that raced on the first lookup from both computing and adding the value.
As an architect, how do you ensure that the pattern is properly applied? Unit testing? Fair enough, but you will hardly detect threading issues this way. Code review? Maybe that's the solution.
Now, what if you decide to change the pattern? For instance, you detect a bug in the cache component and decide to use your own. Are you going to edit thousands of methods? It's not just refactoring: what if the new component has different semantics?
What if you decide that a method is not going to be cached any more? How difficult will it be to remove caching code?
The AOP solution (whatever the framework is) has the following advantages over plain code:
It reduces the number of lines of code.
It reduces the coupling between components, so you don't have to change many things when you decide to change the logging component (just update the aspect); it therefore improves the capacity of your source code to cope with new requirements over time.
Because there is less code, the probability of bugs is lower for a given set of features, therefore AOP improves the quality of your code.
So if you put it all together:
Aspects reduce both development costs and maintenance costs of software.
I have a 90 min talk on this topic and you can watch it at http://vimeo.com/2116491.
Again, the architectural advantages of AOP are independent of the framework you choose. The differences between frameworks (also discussed in this video) influence principally the extent to which you can apply AOP to your code, which was not the point of this question.
Suppose you already have a class which is well-designed, well-tested etc. You want to easily add some timing on some of the methods. Yes, you could use dependency injection, create a decorator class which proxies to the original but with timing for each method - but even that class is going to be a mess of repetition...
... or you can add reflection to the mix and use a dynamic proxy of some description, which lets you write the timing code once, but requires you to get that reflection code just right - which isn't as easy as it might be, especially if generics are involved.
... or you can add an attribute to each method that you want timed, write the timing code once, and apply it as a post-compile step.
I know which seems more elegant to me - and more obvious when reading the code. It can be applied even in situations where DI isn't appropriate (and it really isn't appropriate for every single class in a system) and with no other changes elsewhere.
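For concreteness, here is a minimal sketch of what the attribute approach can look like with PostSharp's OnMethodBoundaryAspect (the TimedAttribute name and the console output are illustrative choices, not from the question, and exact API details vary by PostSharp version):

using System;
using System.Diagnostics;
using PostSharp.Aspects;

// Apply as [Timed] on any method; the post compiler weaves the
// entry/exit calls into the compiled assembly.
[Serializable]
public sealed class TimedAttribute : OnMethodBoundaryAspect
{
    public override void OnEntry(MethodExecutionArgs args)
    {
        // Stash a stopwatch in the per-invocation state slot.
        args.MethodExecutionTag = Stopwatch.StartNew();
    }

    public override void OnExit(MethodExecutionArgs args)
    {
        var stopwatch = (Stopwatch)args.MethodExecutionTag;
        stopwatch.Stop();
        Console.WriteLine("{0} took {1} ms",
            args.Method.Name, stopwatch.ElapsedMilliseconds);
    }
}

Decorating a method is then a one-line change ([Timed] above the method signature), with no decorator class, no proxy, and no changes at the call sites.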
AOP (PostSharp) is for attaching code to all sorts of points in your application from one location, so you don't have to place it at each point yourself.
You cannot achieve what PostSharp can do with Reflection.
I personally don't see a big use for it, in a production system, as most things can be done in other, better, ways (logging, etc).
You may like to review the other threads on this matter:
Anyone with Postsharp experience in production?
Other than logging, and transaction management what are some practical applications of AOP?
Aspect Oriented Programming: What do you use PostSharp for?
etc (search)
Aspects take away all the copy-and-paste code and make adding new features faster.
I hate nothing more than, for example, having to write the same piece of code over and over again. Gael has a very nice example regarding INotifyPropertyChanged on his website (www.postsharp.net).
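To make that concrete, this is the sort of INotifyPropertyChanged plumbing (a hypothetical Person class, following the standard .NET pattern) that has to be repeated by hand for every property when no aspect is doing it for you:

using System.ComponentModel;

public class Person : INotifyPropertyChanged
{
    public event PropertyChangedEventHandler PropertyChanged;

    private string name;
    public string Name
    {
        get { return name; }
        set
        {
            if (name == value) return;
            name = value;
            // This raising ritual gets copied into every setter.
            var handler = PropertyChanged;
            if (handler != null)
                handler(this, new PropertyChangedEventArgs("Name"));
        }
    }
}

With an aspect, the class can shrink to plain auto-properties plus a single attribute.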
This is exactly what AOP is for. Forget about the technical details, just implement what you are being asked for.
In the long run, I think we all should say goodbye to the way we are writing software now. It's tedious and plainly stupid to write boilerplate code and iterate manually.
The future belongs to declarative, functional style being held together by an object oriented framework - and the cross cutting concerns being handled by aspects.
I guess the only people who will not get it soon are the guys who are still paid for lines of code.

Are there any tools to visualize template/class methods and their usage?

I have taken over a large code base and would like to get an overview of how and where certain classes and their methods are used.
Is there any good tool that can somehow visualize the dependencies and draw a nice call tree or something similar?
The code is in C++ in Visual Studio if that helps narrow down any selection.
Here are a few options:
CodeDrawer
CC-RIDER
Doxygen
The last one, Doxygen, is more of an automatic documentation tool, but it is capable of generating dependency graphs and inheritance diagrams. It's also licensed under the GPL, unlike the first two, which are not free.
When I have used Doxygen it has produced a full list of callers and callees. I think you have to turn it on.
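For reference, the relevant Doxyfile settings are along these lines (option names from memory; check them against your Doxygen version's documentation):

EXTRACT_ALL            = YES
REFERENCED_BY_RELATION = YES   # list, for each function, the functions that call it
REFERENCES_RELATION    = YES   # list, for each function, the functions it calls
HAVE_DOT               = YES   # requires Graphviz; enables the graphical graphs
CALL_GRAPH             = YES
CALLER_GRAPH           = YES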
David, thanks for the suggestions. I spent the weekend trialing the programs.
Doxygen seems to be the most comprehensive of the 3, but it still leaves some things to be desired in regard to callers of methods.
All 3 seem to have problems with C++ templates to varying degrees. CC-RIDER simply crashed in the middle of the analysis, and CodeDrawer does not show many of the relationships. Doxygen worked pretty well, but it too did not find and show all relations, and instead overwhelmed me with lots of macro references until I filtered them out.
So maybe I should clarify "large codebase" a bit for any other suggestions: >100k lines of code overall, spread out over more than 100 template files, plus several actual class files pulling it all together.
Any other tools out there that might be up to the task and could do better (be more thorough)? Oh, and specifically: anything that understands IDL and COM interfaces?
When I have used Doxygen it has produced a full list of callers and callees. I think you have to turn it on.
I did that, of course, but as I mentioned, Doxygen does not consider interfaces between objects as they are defined in the IDL. It "only" shows direct C++ calls.
Don't get me wrong, it is already amazing what it does, but it is still not complete from my high-level view of trying to get a good understanding of how everything fits together.
In Java I would start with JDepend. In .NET, with NDepend. Don't know about C++.