IcedCoffeeScript or jQuery Deferred?

Recently, while working on a Backbone.js/jQuery/CoffeeScript project, I ran into a mess of callback and timing issues. I needed to wait for something to complete before proceeding, and ended up with deeply nested callbacks, which are confusing and hard to debug. Then I found two possible solutions: jQuery Deferred and IcedCoffeeScript.
IcedCoffeeScript looks really easy: you simply add await and defer. However, I wonder whether it's here to stay. There are only two questions about it here on Stack Overflow, and not much talk about it compared to CoffeeScript.
Another thing: what's the difference between the two approaches? They seem to do mostly the same thing, except that IcedCoffeeScript reads more like procedural code, while jQuery Deferred doesn't untangle my mess of callbacks as much.

These are very different technologies:
IcedCoffeeScript is a precompiler that extends CoffeeScript with the await and defer keywords, transforming your code so that you can write it in a synchronous style. In the generated JavaScript, await and defer produce nested functions.
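For illustration, here is a minimal sketch (getUser, getPosts and render are hypothetical async functions that each take a callback as their last argument):

    # plain CoffeeScript: each async step nests another callback
    getUser id, (user) ->
      getPosts user, (posts) ->
        render user, posts

    # IcedCoffeeScript: await/defer flatten the same flow;
    # defer user generates the callback and fills in `user`
    await getUser id, defer user
    await getPosts user, defer posts
    render user, posts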
jQuery Deferreds (aka Promises) are a way of side-stepping callbacks altogether: instead of taking a callback, an async function returns a Promise, and you then attach callbacks to that Promise. It's a simple but powerful technique. I devote a chapter to it in my book, Async JavaScript.
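To make that concrete, here is a small sketch in CoffeeScript with jQuery (the /users/:id endpoint is made up for illustration; $.ajax already returns a promise-like jqXHR object):

    # instead of taking a callback, the function returns a promise
    fetchUser = (id) ->
      $.ajax url: "/users/#{id}"

    # callbacks are attached to the promise, and promises compose:
    $.when(fetchUser(1), fetchUser(2))
      .done (res1, res2) -> console.log 'both requests finished'
      .fail -> console.error 'at least one request failed'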
Each of these technologies works best with a certain kind of API. await and defer expect a function to take a single callback as its last argument. Promises work best when you have lots of other Promises in your app.
There's no magic bullet for dealing with async behavior in JavaScript. You need to understand callbacks, Promises, and PubSub (aka EventEmitters) and choose the best tool for each job. Even if you use IcedCoffeeScript (which is cool), there are still times when Promises will save you a huge amount of work.
I hope that helps. Check out my book, Async JavaScript, for much more information.

I think IcedCoffeeScript is here to stay.
I plan on supporting it indefinitely, and am pretty regular about patching it against the mainline. I use it on almost all of my personal projects, and the site Combosaurus.com, an OkCupid Labs project just about to hit general release, is written in IcedCoffeeScript.

I wrote a blog post about the differences between standard event callbacks, pub sub and deferreds, which might help you out:
Event Emitter, Pub Sub or Deferred Promises … which should you choose?
The intro reads:
The obvious answer to the question “should you choose an Event
Emitter, Pub Sub or Deferred / Promises” is that it depends on what
you are doing.
In this post I will explore a little about how each pattern works with
(very) basic implementations, and then I’ll look at the reasons why
you might choose one over another.
And the summary is:
Event Emitter is a real classic, and allows for good practices and
control over things that happen; Pub Sub is more flexible for
cross-component events; Deferred and Promises give a powerful way to
handle callbacks.
Applying the summary to your problem, I would suggest that Deferreds and Promises would probably help you out a lot.
I don't know about you, but I found that getting my head around how I would implement Deferred really helped me understand the intricacies of using it. I've included some sample code in the post too, which, being (extremely) simple, might help you out when looking at someone else's more robust attempt.
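(The linked post has its own sample; the following is just a toy sketch along the same lines, in CoffeeScript, of the moving parts of a Deferred: a list of pending callbacks, a resolve() that flushes them, and a chainable done().)

    class TinyDeferred
      constructor: ->
        @callbacks = []
        @resolved  = false

      # attach a callback; fire immediately if already resolved
      done: (cb) ->
        if @resolved then cb @value else @callbacks.push cb
        this

      # settle the value and flush pending callbacks
      resolve: (@value) ->
        @resolved = true
        cb @value for cb in @callbacks
        this

    d = new TinyDeferred()
    setTimeout (-> d.resolve 42), 100
    d.done (v) -> console.log "got #{v}"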

Oh, I'd definitely rely on IcedCoffeeScript. If you like and use CoffeeScript syntax, you'll find that it simply "blends" naturally with it...
Regarding the project's future, I had the same dilemma a while ago, but it looks like Maxwell Krohn is actively maintaining it and there's a growing community and awareness around it (ok, maybe not here on Stack Overflow yet).
I started using it last year, and now I couldn't really imagine coding "real-life" apps without await and defer.
You can find a brief "protip" on elegant asynchronous control flow with ICS here, and I also wrote a five-part, ironically titled article on using Node.js for generic web projects here that mentions ICS.

Related

ScalaJS: What's the state of the art for cross-platform dates?

I'm using ScalaJS with Play. Many of the models I'd like to use on both JS and JVM platforms involve dates and times. Given the lack of a cross-platform date/time library, how are people approaching this?
Things I know about:
The scalajs-java-time project (https://github.com/scala-js/scala-js-java-time), which ports JDK 8's java.time API to Scala.js. Unfortunately, it's far from complete and, judging by the commit logs, seems to have stalled.
https://github.com/mdedetrich/soda-time is a port of JodaTime to Scala/Scala.js. But it's not ready for production use.
An old post at https://groups.google.com/forum/#!topic/scala-js/6JoJ7x-VxLA suggests storing milliseconds in shared code and then doing implicit conversions on each platform to either js.Date or JodaTime. But we really need a common interface, which this doesn't give.
Li Haoyi's excellent "Hands-on Scala.js" has a simple cross-platform library (http://www.lihaoyi.com/hands-on-scala-js/#ASimpleCross-BuiltLibrary) that could, in theory, be extended to come up with an API in /shared that delegates to JodaTime on the JVM and Moment.js on JS, but that sounds like a lot of work.
(added later) https://github.com/soc/scala-java-time is based on an implementation of java-time that was contributed to OpenJDK. The README claims that most stuff is working. Right now, this looks like the most promising approach for my needs.
Any advice from those who have gone before me? Right now the fourth option seems like my best bet (with the API limited to the stuff I actually use). I'm hoping for something better.
I was in the same boat as you, and the best solution I came up with was cquiroz's scala-java-time library. From reading the comments to your question above, it appears you landed at the same place eventually!
I came here from a Google search, and given how much better this solution is than the alternatives you mention above, let's consider marking this question as resolved for future visitors.

Why doesn't Catalyst support routes dispatching

I'm wondering why Catalyst does not have a routes-based dispatching mechanism. Does anyone have insights on that?
I know most of it would be achievable with chained methods, but I sometimes struggle with them (my bad, I know). I used routes in Mojolicious and they feel somewhat more comfortable.
I'm tempted to roll my own based on Path::Router, but before I embark on such a project I thought I'd ask some wise people.
And, while I'm at it, is there any good source for understanding dispatching in Catalyst, besides reading the code?
Thanks-
It is possible to use routes as the dispatching mechanism in Catalyst. This SO question and answer discuss it, and the meaty part can be found in this Advent Calendar article.
That first link will also refer you to The Definitive Guide to Catalyst, a book well worth getting hold of. Dispatching is discussed at length therein.

What does Crosscutting Requirements/Concerns mean in Programming?

I come across the term "crosscutting requirements/concerns" a lot in the programming world.
Although I think I have an idea of what it means, I still don't have a clear picture. I hear it a lot in connection with web services and SOA in general.
Can this be explained using a hello world example?
It tends to mean "stuff that you want to do in lots of places, which doesn't have an awful lot to do with the real meat of that piece of code".
Common examples are:
Transaction handling
Security
Logging
Error handling
I find it's usually mentioned with respect to Aspect-Oriented Programming (AOP), which attempts to handle things like this declaratively, e.g. with attributes/annotations. As a gross simplification, it's a case of applying boiler-plate code (e.g. to verify the identity/authority of the user in the current context, or to log entry/exit of the method) automatically, without making the code itself messy.
The standard "hello world" example for crosscutting is logging: You have an error in your production system and you have no clue what is going on. To solve it, you really need to see which functions in your code are called and what parameters they get and what they return.
This is a simple task that can be fully automated: Locate all functions (or a subset using a filter of some kind) and add a logging call to them which prints the name and the parameters. Since the code contains all the information you need to complete this task, what you really want is a tool that does it for you and which does it in a single place (instead of having you edit thousands of source files adding log statements everywhere).
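To make that concrete, here is a minimal hand-rolled sketch in CoffeeScript (a real AOP tool does this transformation for you, far more robustly; withLogging and the sample service are made up for illustration):

    # wrap every function-valued property of an object so that entry
    # and exit are logged, without touching the function bodies
    withLogging = (obj) ->
      for name, fn of obj when typeof fn is 'function'
        do (name, fn) ->
          obj[name] = (args...) ->
            console.log "-> #{name}", args
            result = fn.apply this, args
            console.log "<- #{name}", result
            result
      obj

    # the logging concern stays out of the service's own code
    service = withLogging
      add: (a, b) -> a + b

    service.add 2, 3   # logs "-> add [ 2, 3 ]" then "<- add 5"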
I recommend you look at a framework like PostSharp and try out this example from the PostSharp site. If you know Java, AspectJ is worth a look. But first you may want to read the link posted by Jon Skeet :)

Do Poor Code Samples Turn You Away From Libraries?

I've been evaluating a framework that on paper looks great. The problem is that the sample code is incomplete and of poor quality. The supplied reference implementations are for the most part not meant to be used (so they can be considered as sample code as well) and have only succeeded at confusing me.
I know that it's common for things to look better on paper, but my experience with the sample code is turning me away from further investigation.
Do you let poor code samples change your judgment of frameworks/libraries? So far my experience has been similar to the "resume effect": if someone doesn't put the effort into spell checking their resume, they probably won't get the job...
For me, it does. I tend to avoid libraries whose code samples are incomplete. If the library is open source, I can overlook poor samples, since I can look directly at the code and see whether the library's internals are reasonable, and I know that, if there is a problem someday, I could (if I had to) fix it.
If the library is commercial, and their samples and/or documentation is poor, I look elsewhere. I just see it as risk management - poor samples make me fear the quality of the library in general.
No matter how good something is on paper or in theory, it can still be crap when programmed.
I think this is a valid reason to turn away and evaluate other libraries. As a potential user of a library, a lack of documentation and/or bad code samples gives the impression that the library is not yet mature enough for use by third parties. In time it may well gain the missing pieces, but until then I think it's reasonable to look elsewhere.
I was recently evaluating the multitude of blogging applications that people have uploaded to github.com. I quickly skipped the ones with no documentation, as they obviously weren't ready for others to use. The ones that remained at the end had a good README with info on how to get the app up and running, as well as an online example of the code running.
Poor code samples combined with poor documentation will make me turn away from a library unless there is a compelling reason to use it. However, a library that has either good code samples or good documentation is usually worth using. (Assuming that the library itself otherwise meets my needs.)
If I can't find good examples (and/or documentation) illustrating how to use the library, I'm definitely less likely to use it - just as a practical matter, it'll be harder for me to figure out how. But I don't care what the code that implements the library itself looks like. I don't think I'd choose one library/framework over another just because the developers of the one have shown an ability to write cleaner code (which is what I understand the "resume effect" to mean).
Lack of documentation and examples makes me a whole lot less likely to use that particular library. It's not worth my time testing and trying to figure out how a black box works if there are alternate solutions to the problem out there.
Yes, definitely. Every library should come with a simple example program and a CLI interface (for very simple libraries with <3 methods and <10 hooks, one example should suffice).
And why does your framework "look great" if it's so hard to use that even the original coders make mistakes using it?
It certainly matters to me. Evidence of sloppy/incomplete coding and poor communication decreases my confidence that the actual implementation code is stable and robust.
Myself, yes; but there must be people out there who aren't turned off by this, otherwise plenty of open source projects would have died a long, long time ago.

Getting your head around other people's code

I'm occasionally unfortunate enough to have to make alterations to very old, poorly documented and poorly designed code.
It often takes a long time to make a simple change because there is not much structure to the existing code and I really have to read a lot of code before I have a feel for where things would be.
What I think would help a lot in cases like this is a tool that would let you visualise an overview of the code, and then maybe even drill down for more detail. I suspect such a tool would be very hard to get right, given that it is trying to find structure where there is little or none.
I guess this is not really a question, but rather a musing. I should make it into a question: what do others do to get their heads around other people's code, the good and the bad?
Hmm, this is a hard one, so much to say so little time ...
1) If you can run the code it makes life soooo much easier; breakpoints (especially conditional ones) are your friend.
2) A purist's approach would be to write a few unit tests for known functionality, then refactor to improve code and understanding, then re-test. If things break, create more unit tests; repeat until bored/old/moved to a new project.
3) ReSharper is good at showing where things are being used (what's calling a method, for instance); it's static, but a good start, and it helps with refactoring.
4) Many .NET events are coded as public, and events can be a pain to debug at the best of times. Recode them to be private and use a property with add/remove; you can then use a breakpoint to see what is listening on an event.
BTW, I'm playing in the .NET space and would love a tool to help with this kind of stuff; like Joel, does anyone out there know of a good dynamic code reviewing tool?
I have been asked to take ownership of some NASTY code in the past - both work and "play".
Most of the amateurs I took over code for had just sort of evolved the code to do what they needed over several iterations. It was always a giant incestuous mess of library A calling B, calling back into A, calling C, calling B, etc. A lot of the time they'd use threads and not a critical section was to be seen.
I found the best/only way to get a handle on the code was start at the OS entry point [main()] and build my own call stack diagram showing the call tree. You don't really need to build a full tree at the outset. Just trace through the section(s) you're working on at each stage and you'll get a good enough handle on things to be able to run with it.
To top it all off, use the biggest slice of dead tree you can find and a pen. Laying it all out in front of you so you don't have to jump back and forward on screens or pages makes life so much simpler.
EDIT: There's a lot of talk about coding standards... they will just make poor code look consistent with good code (and usually be harder to spot). Coding standards don't always make maintaining code easier.
I do this on a regular basis and have developed some tools and tricks.
Try to get a general overview (object diagram or other).
Document your findings.
Test your assumptions (especially for vague code).
The problem with this is that at most companies you are judged by results. That's why some programmers write poor code fast and move on to a different project, so you are left with the garbage, and your boss compares your sluggish progress with the quick-and-dirty guy's. (Luckily my current employer is different.)
I generally use UML sequence diagrams of various key ways that the component is used. I don't know of any tools that can generate them automatically, but many UML tools such as BoUML and EA Sparx can create classes/operations from source code which saves some typing.
The definitive text on this situation is Michael Feathers' Working Effectively with Legacy Code. As S. Lott says, get some unit tests in to establish the behaviour of the legacy code. Once you have those in, you can begin to refactor. There seems to be a sample chapter available on the Object Mentor website.
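As a minimal illustration of the kind of test Feathers calls a characterization test, here is a sketch in CoffeeScript using Node's built-in assert module (legacyPriceFor, the module path and the expected values are all hypothetical; the point is to pin down what the code does today, not what it should do):

    assert = require 'assert'
    {legacyPriceFor} = require './legacy'   # hypothetical legacy module

    # record today's observed behaviour so any refactoring that
    # changes the output fails loudly
    assert.strictEqual legacyPriceFor('WIDGET', 3), 29.85
    assert.strictEqual legacyPriceFor('WIDGET', 0), 0
    console.log 'characterization tests passed'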
I strongly recommend BOUML. It's a free UML modelling tool, which:
is extremely fast (fastest UML tool ever created, check out benchmarks),
has rock solid C++ import support,
has great SVG export support, which is important, because viewing large graphs in vector format, which scales fast in e.g. Firefox, is very convenient (you can quickly switch between "birds eye" view and class detail view),
is full featured and intensively developed (look at the development history; it's hard to believe that such fast progress is possible).
So: import your code into BOUML and view it there, or export to SVG and view it in Firefox.
See Unit Testing Legacy ASP.NET Webforms Applications for advice on getting a grip on legacy apps via unit testing.
There are many similar questions and answers. Here's the search https://stackoverflow.com/search?q=unit+test+legacy
The point is that getting your head around legacy is probably easiest if you are writing unit tests for that legacy.
I haven't had great luck with tools to automate the review of poorly documented/executed code, because a confusing or badly designed program generally translates into a less-than-useful model. It's not exciting or immediately rewarding, but I've had the best results with picking a spot and following the program execution line by line, documenting and adding comments as I go, and refactoring where applicable.
A good IDE (Emacs or Eclipse) can help in many cases. On a UNIX platform, there are also tools for cross-referencing (etags, ctags) and checking (lint), or gcc with many warning options turned on.
First, before trying to comprehend a function/method, I would refactor it a bit to fit your coding conventions (spaces, braces, indentation) and remove most of the comments if they seem to be wrong.
Then I would refactor and comment the parts you understood, and try to find/grep those parts over the whole source tree and refactor them there also.
Over time, you get nicer code that you like to work with.
I personally do a lot of drawing of diagrams, and figuring out the bones of the structure.
The fad du jour (and possibly quite rightly) has got me writing unit tests to test my assertions and build up a safety net for changes I make to the system.
Once I get to a point where I'm comfortable enough knowing what the system does, I'll take a stab at fixing bugs in the sanest way possible and hope my safety net is nearing completion.
That's just me, however. ;)
I have actually been using the refactoring features of ReSharper to help me get a handle on a bunch of projects that I inherited recently. So, to figure out another programmer's very poorly structured, undocumented code, I actually start by refactoring it.
Cleaning up the code, renaming methods, classes and namespaces properly, and extracting methods are all structural changes that can shed light on what a piece of code is supposed to do. It might sound counterintuitive to refactor code that you don't "know", but trust me, ReSharper really allows you to do this. Take, for example, the issue of red-herring dead code. You see a method in a class, or perhaps a strangely named variable. You can start by trying to look up usages or, ugh, do a text search, but ReSharper will actually detect dead code and color it gray. As soon as you open a file, you see in gray (and with scroll-bar flags) what would in the past have been confusing red herrings.
There are dozens of other tricks, and probably a number of other tools that can do similar things, but I am a ReSharper junkie.
Cheers.
Get to know the software intimately from a user's point of view. A lot can be learnt about the underlying structure by studying and interacting with the user interface(s).
Printouts
Whiteboards
Lots of notepaper
Lots of Starbucks
Being able to scribble all over the poor thing is the most useful method for me. Usually I turn up a lot of "huh, that's funny..." moments while trying to make basic code-structure diagrams, and those turn out to be more useful than the diagrams themselves in the end. Automated tools are probably more helpful than I give them credit for, but the value of finding those funny bits exceeds the value of rapidly generated diagrams for me.
For diagrams, I look for mostly where the data is going. Where does it come in, where does it end up, and what does it go through on the way. Generally what happens to the data seems to give a good impression of the overall layout, and some bones to come back to if I'm rewriting.
When I'm working on legacy code, I don't attempt to understand the entire system. That would result in complexity overload and subsequent brain explosion.
Rather, I take one single feature of the system and try to understand completely how it works, from end to end. I will generally debug into the code, starting from the point in the UI code where I can find the specific functionality (since this is usually the only thing I'll be able to find at first). Then I will perform some action in the GUI, and drill down in the code all the way down into the database and then back up. This usually results in a complete understanding of at least one feature of the system, and sometimes gives insight into other parts of the system as well.
Once I understand what functions are being called and what stored procedures, tables, and views are involved, I then do a search through the code to find out what other parts of the application rely on these same functions/procs. This is how I find out if a change I'm going to make will break anything else in the system.
It can also sometimes be useful to attempt to make diagrams of the database and/or code structure, but sometimes it's just so bad or so insanely complex that it's better to ignore the system as a whole and just focus on the part that you need to change.
My big problem is that I (currently) have very large systems to understand in a fairly short space of time (I pity contract developers on this point) and don't have a lot of experience doing this (having previously been fortunate enough to be the one designing from the ground up.)
One method I use is to study the naming of variables, methods, classes, etc. This is useful because it (hopefully, increasingly) builds a high-level view of the original train of thought from the atomic level up.
I say this because developers typically name their elements with (what they believe are) meaningful names, providing insight into their intended function. This is flawed, admittedly, if the developer has a defective understanding of their program or the terminology, or (often the case, imho) is trying to sound clever. How many developers have seen a keyword or class name and only then looked the term up in a dictionary for the first time?
It's all about the standards and coding rules your company uses.
If everyone codes in a different style, it's hard to maintain another programmer's code. If you decide on a standard and establish some rules, everything will be fine :) Note that you don't have to make a lot of rules, because people should have the possibility to code in a style they like; otherwise you can be very surprised.