From a theoretical standpoint, is anything better than BitTorrent for file distribution?

If I close my eyes hard enough, I can remember the days when p2p meant one-to-one by default.
Despite it being established for nearly a decade, I still marvel at the quantum leap that is torrent distribution.
Which makes me wonder, and Wikipedia doesn't address this, can there be a better way? I'm talking on a theoretical basis here... is anything possibly faster than the torrent method for distributing files to a massive network? Or has Bram Cohen basically won the Internetz?
(Feel free to relocate this to a different exchange if it doesn't fit best here...)

A research paper I found online points out in its introduction some reasons why BitTorrent is so successful compared to other P2P architectures, including the Tit-For-Tat (TFT) mechanism, whereby nodes/users preferentially upload more to the nodes/users they can download from (that is the hope, anyway). There are other reasons why BitTorrent is so popular, including reliability due to its distributed nature (much like the internet itself), speed, and scalability.
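As a toy illustration of the TFT idea (not the actual BitTorrent choking algorithm, just the gist): each peer periodically "unchokes" the few peers it has recently downloaded the most from, plus one random peer so newcomers get a chance to prove themselves. All names and numbers below are made up.

```python
import random

def choose_unchoked(download_rates, regular_slots=3):
    """Tit-For-Tat sketch: reciprocate with the peers that gave us the
    most data recently, plus one random 'optimistic' unchoke.

    download_rates: dict mapping peer id -> bytes received from that
    peer during the last evaluation period (illustrative input).
    """
    # Reciprocate: the best uploaders to us get our regular slots.
    by_rate = sorted(download_rates, key=download_rates.get, reverse=True)
    unchoked = set(by_rate[:regular_slots])

    # Optimistic unchoke: pick one of the remaining peers at random,
    # so a new peer with nothing to offer yet can still bootstrap.
    choked = [p for p in download_rates if p not in unchoked]
    if choked:
        unchoked.add(random.choice(choked))
    return unchoked

rates = {"peer-a": 900_000, "peer-b": 250_000, "peer-c": 40_000,
         "peer-d": 0, "peer-e": 10_000}
print(choose_unchoked(rates))  # top three uploaders + one random peer
```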
However, I think the theoretical question is not whether something will be better than BitTorrent, but rather, as this research paper shows, whether improvements can still be made:
Analyzing and Improving BitTorrent Performance
Hope this helps, and if you find any research papers, please post them back.

BitTorrent is actually a rather primitive protocol, and a huge step backwards from even eDonkey. In the beginning it was nothing more than a distributed FTP equivalent, and even now the most important features have only barely been added back in.
Key things were/are:
Actually being decentralized. (Trackers were the centralized aspect. eDonkey failed in that too, but fixed it earlier than BitTorrent.)
No built-in search. (Even Napster already had a search function. Catalog websites don’t count, as they are yet another centralized point of failure.)
No encryption. (Has partially been fixed, but not in a truly trustworthy way.)
No anonymization. (This point still fully stands. Darknet protocols, except for Tor, are a better choice.)
There are much better, much older protocols out there. Hell, when it came out, basically anything apart from FTP was vastly superior.
BitTorrent only became successful due to its speed. Which it “achieved” by throwing out nearly the whole point of p2p file sharing. Not a feat at all. Sure my car is faster, if it throws out any interest in comfort, safety or fuel efficiency. :)
But BitTorrent was never intended for file sharing anyway. It was literally intended to replace FTP/HTTP for downloads when the site couldn’t deliver the required bandwidth. And for nothing else.
For that it is OK. But otherwise it is the Windows ME of file sharing: A core designed for a much much simpler world, abused way past its design specs, with the features that are standard in all other systems badly tacked on.


What are some new technical developments in operating systems?

I need to investigate some new technical developments in operating systems. What would be some interesting developments to write about?
I am looking for any interesting development within the last 5 years and have only come across Azure Sphere.
I expect to write about 3 or 4 examples of new technical developments in operating systems
I've spent almost 20 years looking for something that's truly new (for operating systems), and the only thing I can think of that's recent is the Spectre (and Meltdown) security disaster and the associated mitigations.
Almost everything else is the same old ideas regurgitated/reimplemented with fresh marketing hype. For an example (your example), consider Azure Sphere: a low-budget/low-effort attempt at recycling an existing kernel (Linux, which itself is/was a reimplementation of a crusty lump of memorabilia from the late 1960s), where the main technical achievement is slapping on a coat of paint in an attempt to convince suckers that a design originally intended for "dumb terminals connected to a mainframe" actually makes sense for modern embedded "Internet of Things" purposes (it's like gluing a muffler onto a horse and pretending it's a "new" kind of motorcycle).
Note that there are things that seem "new" but have nothing to do with the underlying OS. One example is augmented reality (e.g. HoloLens), which is new for user space but barely more than a few tweaks to a few APIs as far as the OS itself is concerned (if I remember right, Microsoft just uses Windows 10, which is mostly an evolution of ideas originating from Windows NT in the 1990s, if not earlier).
Also note that a major part of the problem is that anything that is actually new breaks compatibility, so it either dies or gets nerfed until it's the same old stuff we've seen hundreds of times before. An example of this is "The Machine" (from HP) - a bunch of enthusiastic goals reduced to "Oops, we're repurposing that and, um, using Linux. None of the things that made it interesting survived. Sorry". The other part of the problem is difficulty.
Of course this shouldn't be unexpected. When any new technology (fire, wheels, combustion engines, electricity, ..) arrives you get a period of "pioneering" where new ideas and breakthroughs are frequent; but then the technology matures and these things become infrequent.
So...
For some advice that might be useful in practice (assuming this is a university assignment): "waffling" is a valuable skill to learn. Take anything that might be vaguely related and stretch the truth. Mumble. Be vague where it matters and provide unnecessary detail where it doesn't. Use words like "instantiationalising" to make your professor's eyes glaze over so they can't find any meaning hidden by your words (and let them assume it's their fault for being too busy/distracted to understand, and not your fault). For help, try to read research papers in their entirety and then see if you can figure out why it's impossible for anyone to remember anything more than "introduction and conclusion". With practice, a skilled waffler can fill 10+ pages without actually saying anything at all.

SmartGWT, ZK and GenericFrame - Online Homework

Good day,
Our school, a small high school in semi-rural New Zealand, is currently looking into online homework solutions. Being one of the IT guys, I have been asked to look into some of the options. We have checked around and there are no robust solutions that cover what we are looking for. So, we are considering development of our own system, either on our own or in collaboration with some other schools.
Before I put significant time into any one option, I thought I should ask for some expert advice.
Please keep in mind that one of our major obstacles is that around 20% of our students are on dial-up because broadband is not available in their area.
We are also not limited to the technologies listed; they just are the ones we have been looking into up to this point.
With that in mind, here goes.
1. Is there a way to pre-determine the bandwidth needed for these technologies?
2. If bandwidth continued to be too limiting, could the final solution stand alone so we could distribute it to students on CD or USB stick?
3. What are some pros/cons of each for use with databases, specifically mysql or postgresql? (After all we do need to keep track of lots of data)
4. What are some pros/cons of each of these for RIA development?
I appreciate everyone for sharing their time and expertise on the matter.
Cheers,
Ben
1) If you write a full-AJAX application, such as in GWT, the bandwidth will be:
a) the size of the application JavaScript, images, etc. You may assume that everything is loaded when the user logs in (the browser image cache may seem big, but it is easily overrun).
b) the size of the communication - in GWT that depends only on you! There is no magic full-page reloading; only what YOU choose to send is sent. (A back-of-the-envelope dial-up estimate follows after this list.)
2) I don't quite catch your point; standalone applications can be distributed that way, but applications that use databases generally can't.
3) PostgreSQL has high compatibility with Oracle - the same transaction + SELECT FOR UPDATE behaviour, and PL/pgSQL is heavily inspired by PL/SQL (making it easy to rewrite stored procedures).
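To make point 1a concrete, here is the back-of-the-envelope dial-up estimate promised above. Every number below is an assumption to be replaced with measurements of your actual compiled app; none of it is a property of GWT or ZK:

```python
# Rough dial-up feasibility check. All sizes are invented placeholders.
MODEM_BPS = 56_000 / 8             # ~56 kbit/s modem, in bytes/second
APP_DOWNLOAD = 500 * 1024          # assumed size of compiled JS + images
RPC_PAYLOAD = 2 * 1024             # assumed size of one request/response pair
REQUESTS_PER_SESSION = 30          # assumed interactions per homework session

first_visit = APP_DOWNLOAD / MODEM_BPS          # one-time cost (then cached)
per_session = REQUESTS_PER_SESSION * RPC_PAYLOAD / MODEM_BPS

print(f"first visit download: ~{first_visit / 60:.1f} minutes")
print(f"per-session traffic:  ~{per_session:.0f} seconds total")
```

Under these assumptions the initial download is painful on dial-up (around a minute and a half) but tolerable as a one-off, while the per-session RPC traffic is only a few seconds; measuring your real payloads is what settles question 1.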
I personally suggest MySQL for a school project for its simplicity. PostgreSQL is powerful, but a bit complicated to configure, and its visual tool for optimizing queries is not good.
Without considering the bandwidth, I definitely suggest ZK since, again, it is much easier to learn, develop, and maintain (and also more powerful). The bandwidth consumption and latency of GWT really depend on how much effort you want to invest and how familiar your people are with distributed computing. With ZK, the network traffic is basically the state of the UI (not the data), which is reasonably small. In short, you can get the best bandwidth and latency if you optimize heavily with GWT, while ZK leaves you less to worry about; but if you want to improve further with ZK, you have to drop down to jQuery (i.e., JavaScript).
Thanks lechlukasz, I appreciate your comments and insight.
I will clarify my point about stand alone applications. We have a number of students, as high as 20%, who do not have access to broadband due to their geographic location. We are considering, as part of the design, how we may be able to distribute a stand alone version.
For instance, if we were to abstract all the database calls using a separate class in GWT, we could recompile a stand alone version that didn't make the database calls. The database would likely only be for tracking results and reporting.
In reality, we would likely implement the front end product first with references to empty methods for storing the results in a database and implement those methods at a later time.
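That abstraction might look something like the following sketch. The real project would be Java/GWT; this Python sketch, with invented names, just shows the shape of the idea:

```python
from abc import ABC, abstractmethod

class ResultStore(ABC):
    """All database access goes through this one interface, so the
    homework front end never talks to MySQL/PostgreSQL directly."""
    @abstractmethod
    def save_result(self, student_id: str, exercise: str, score: int) -> None: ...

class DatabaseResultStore(ResultStore):
    """Online build: would write results to the school's database."""
    def save_result(self, student_id, exercise, score):
        # e.g. an INSERT INTO results ... issued via a server-side call
        raise NotImplementedError("wire up to the real database here")

class NullResultStore(ResultStore):
    """Standalone CD/USB build: same interface, but nothing is stored."""
    def save_result(self, student_id, exercise, score):
        pass  # deliberately does nothing offline

# A startup/compile-time switch decides which implementation is used:
store = NullResultStore()  # or DatabaseResultStore() for the online build
store.save_result("student-42", "algebra-1", 8)
```

The front end only ever sees ResultStore, so the standalone build is just a different wiring of the same code.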
For the record, we have started to code up some test cases using GWT/SmartGWT and are pleased with the results so far. We cannot comment on the other technologies under consideration, because we didn't try them to the same extent.
Cheers,
Ben

Encouraging good development practices for non-professional programmers? [closed]

In my copious free time, I collaborate with a number of scientists (mostly biologists) who develop software, databases, and other tools related to the work they do.
Generally these projects are built on a one-off basis, used in-house, and eventually someone decides "oh, this could be useful to other people," so they release a binary or slap a PHP interface onto it and shove it onto the web. However, they typically can't be bothered to make their source code or dumps of their databases available for other developers, so in practice, these projects usually die when the project for which the code was written comes to an end or loses funding. A few months (or years) later, some other lab has a need for the same kind of tool, they have to repeat the work that the first lab did, that project eventually dies, lather, rinse, repeat.
Does anyone have any suggestions for how to persuade people whose primary job isn't programming that it's of benefit to their community for them to be more open with the tools they've built?
Similarly, any advice on how to communicate the idea that version control, bug tracking, refactoring, automated tests, continuous integration and other common practices we professional developers take for granted are good ideas worth spending time on?
Unfortunately, a lot of scientists seem to hold the opinion that programming is a dull, make-work necessary evil and that their research is much more important, not realising that these days, software development is part of scientific research, and if the community as a whole were to raise the bar for development standards, everyone would benefit.
Have you ever been in a situation like this? What worked for you?
Software Carpentry sounds like a match for your request:
Overview

Many scientists and engineers spend much of their lives programming, but only a handful have ever been taught how to do this well. As a result, they spend their time wrestling with software, instead of doing research, but have no idea how reliable or efficient their programs are.

This course is an intensive introduction to basic software development practices for scientists and engineers that can reduce the time they spend programming by 20-25%. All of the material is open source: it may be used freely by anyone for educational or commercial purposes, and research groups in academia and industry are actively encouraged to adapt it to their needs.
Let me preface this by saying that I'm a bioinformatician, so I see the things you're talking about all the time. There's some truth to the fact that many of these people are biologists-turned-coders who just don't have the exposure to best practices.
That said, the core problem isn't that these people don't know about good practices, or don't care. The problem is that there is no incentive for them to spend more time learning software engineering, or to clean up their code and release it.
In an academic research setting, your reputation (and thus your future job prospects) depends almost entirely on the number and quality of publications that you've contributed to. Publications on methods or new algorithms are not given as much respect as those that report new biological findings. So after I do a quick analysis of a dataset, there's very little incentive for me to spend lots of time cleaning up my code and releasing it, when I could be moving on to the next dataset and making more biological discoveries.
I'll also note that the availability of funding for computational development is orders of magnitude less than that available for doing the biology. In a climate where only 10% of submitted grants are getting funded, scientists don't have the luxury of taking time to clean and release their code, when doing so doesn't help them keep their lab funded.
So, there's the problem in a nutshell. As a bioinformatician, I think it's perverse and often frustrating.
That said, there is hope for the future. With second-and-third generation sequencing, in particular, biology is moving into the realm of high-throughput discovery, where data mining and solid computational pipelines become integral to the success of the science. As that happens, you'll see more and more funding for computational projects, and more and more real software engineering happening.
It's not exactly simple, but demonstration by example would probably drive the point home most effectively: find a task the researcher needs done, find someone who did take the time to make a tool with source available, and point out how much time the researcher could save by having that tool available - then point out that they could give back to the community in the same fashion.
In effect, what you are asking them to do is become professional developers (with their copious free time), in addition to their chosen profession. Their reluctance is understandable.
Does anyone have any suggestions for how to persuade people whose primary job isn't programming that it's of benefit to their community for them to be more open with the tools they've built?
Give up. Seriously, this is like teaching a pig to sing. (I can say this because I used to be a physicist so I know what they're like.)
The real issue is that your colleagues are rewarded for scientific output measured in publications, not software. It's hard enough in computer science to get recognized for building software; in the other sciences, it's nearly impossible.
You can't sell good development practice to your biology friends on the grounds that "it's good for you." They're going to ask "should I invest effort in learning about good software practice, or should I invest the same effort to publish another biology paper?" No contest.
Maybe framing it in terms of academic/intellectual responsibility would help, to a degree - sharing your source is, in many ways, like properly citing your sources or detailing your research methodology. There are similar arguments to be made for some of the "professional software developer" behaviors you'd like to encourage, though I think releasing the code is probably an easier sell on these grounds than other things which could require significantly more work.
Actually, asking any busy project team to include in their schedule time for making their software suitable for adoption by another team is extremely hard in my experience.
Doing extra work for the public good is a big ask.
I've seen a common pattern of "harvesting" after the project is complete, reflecting that immediate coding for reuse tends to get lost in the urgency of the day.
The only avenue I can think of is if the reuse is within an organisation with a budget for a "hunter gatherer", someone whose reason for being there is IT.
You may be on more of a win for things such as unit tests because they have immediate payback for the development.
For one thing, could we please stop teaching biologists Perl? Teaching non-professional programmers a write-only language is practically guaranteed to lead to unmaintainable, throw-away code. Python fills the same niche, is just as easy to learn (it's even used to teach kids programming!), and is much more readable.
Draw parallels with statistics. Stats is a crucial part of scientific research, and one where the only sensible advice is: either learn to do it properly, or get an expert to do it for you. Incorrectly-done stats can completely undermine a paper, just as badly-written code can completely undermine a public database or web resource.
PS: This blog is very good, but getting them to read it will be an uphill struggle: Programming for Scientists
Chris,
I agree with you to a degree, but in my experience what ends up happening is that in their eagerness to publish you end up with too many "me too" codes and methods, which don't really add to the quality of science. If there was a little more thought about open sourcing code and encouraging others to contribute (without necessarily getting publications out of it) then everyone would benefit.
Definitely agree that a separation between the scientific programmers and the software engineers is a good thing, especially for production applications. But even for scientific programming, the quality of my code would have been so much better if I had followed good practices at the time.
In my experience the best way of getting people to program cleanly is to show a good example when you're working with them.
eg: "I never spend hopeless days debugging my code because the first things I code are automated unit tests that will pinpoint problems when they are small and easily detectable"
or: "I'm very bad at keeping track of versions of things, but sometimes my new code does break what did work before. So I use svn/git/dropbox to keep track of things for me"
In my experience that kind of statement can raise the interest of "biologists that learned how to script".
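As a concrete version of the first statement above, a test file as small as this is often enough to make the point (the function under test is an invented bioinformatics-flavoured example):

```python
import unittest

def gc_content(seq: str) -> float:
    """Fraction of G/C bases in a DNA sequence -- a typical small
    helper a biologist-programmer might write."""
    if not seq:
        raise ValueError("empty sequence")
    seq = seq.upper()
    return (seq.count("G") + seq.count("C")) / len(seq)

class TestGcContent(unittest.TestCase):
    def test_half_gc(self):
        self.assertAlmostEqual(gc_content("ATGC"), 0.5)

    def test_lowercase_input(self):
        self.assertAlmostEqual(gc_content("atgc"), 0.5)

    def test_empty_sequence_rejected(self):
        with self.assertRaises(ValueError):
            gc_content("")

if __name__ == "__main__":
    unittest.main()  # pinpoints a bug the moment it is introduced
```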
And if you need to collaborate on a bigger project, make it clear that you have more experience and that everything will go more smoothly if things are done your way.
Regarding publication of code, current practice is indeed frustrating. I would like to see a new journal like Source Code for Biology and Medicine, where code is peer-reviewed and can be published, but that has no (or very low) publication costs. Putting code on SourceForge or elsewhere is indeed not "scientifically worth it" because it doesn't make a line on your publication list, and most code is not revolutionary enough to warrant paying $1,000 for publication in Source Code for Biology and Medicine or PLoS One...
You could have them use a content management system, like Joomla. That way they only push content and not code.
I wouldn't so much persuade as I would streamline the process. Document it clearly, make video tutorials and bundle some kind of tool chain that makes it ridiculously easy to get source repositories set up without requiring them to become experts in something that isn't their main field.
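A sketch of what such a bundled tool chain might look like, assuming git is installed and already configured with a user identity (every path, filename, and message here is made up):

```python
import subprocess
from pathlib import Path

def init_project(path: str) -> None:
    """One-command project setup, so researchers never have to learn
    the incantations themselves. Assumes git is on the PATH and that
    user.name/user.email are configured."""
    root = Path(path)
    root.mkdir(parents=True, exist_ok=True)
    (root / ".gitignore").write_text("*.pyc\n__pycache__/\nresults/\n")
    (root / "README.md").write_text("# Describe your tool here\n")
    subprocess.run(["git", "init"], cwd=root, check=True)
    subprocess.run(["git", "add", "-A"], cwd=root, check=True)
    subprocess.run(["git", "commit", "-m", "Initial commit"],
                   cwd=root, check=True)

init_project("my-lab-tool")
```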
Take a really good programmer who already knows best practices, and have your scientists teach him what they need and what they do; eventually the programmer will have the minimum domain knowledge (I suspect it takes between 1 and 3 years, depending on the domain) to do what the scientists ask for.
Developers always learn another domain of competency, because most of their programs are not for developers, so they need to know what the "client" does.
To be devil's advocate, is teaching scientists to be good software engineers the right thing to do? Software in research is usually very purpose specific - sometimes to the point where a piece of code needs to run successfully only once on a single data set. The results then feed into a publication and the goal is met. And there's a high risk that your technique or algorithm will be superseded by a better one in short order. So, there's a real risk that effort spent producing sparkling code will be wasted.
When you're frustrated by wading through a swamp of ill-formed Perl code, just think that the code you're looking at is one of the rare survivors. Mountains of such code have been written, used a few times, then discarded, never to see the light of day again.
I guess I'm just saying there's a big place in research for smelly heinous one-off prototype code. There are good reasons why such code exists. It may not be pretty, but if it gets the job done, who cares? We can always hire a software engineer to write the production-ready version later, IF it turns out to be justified, and let our scientists move on.

How do I plan an enterprise level web application?

I'm at a point in my freelance career where I've developed several web applications for small to medium sized businesses that support things such as project management, booking/reservations, and email management.
I like the work but find that eventually my applications get to a point where the overhead for maintenance is very high. I look back at code I wrote 6 months ago and find I have to spend a while just relearning how I originally coded it before I can make a fix or add features. I do try to use frameworks (I've used Zend Framework before, and am considering Django for my next project).
What techniques or strategies do you use to plan out an application that is capable of handling a lot of users without breaking and still keeping the code clean enough to maintain easily?
If anyone has any books or articles they could recommend, that would be greatly appreciated as well.
Although there are certainly good articles on that topic, none of them is a substitute for real-world experience.
Maintainability is not something you can plan up front, except on very small projects. It is something you need to take care of during the whole project. In fact, creating loads of classes and infrastructure code in advance can produce code which is even harder to understand than naive spaghetti code.
So my advice is to clean up your existing projects by continuously refactoring them. Look at the parts which were a pain to change, and strive for simpler solutions that are easier to understand and to adjust. If the code is too bad even for that, consider rewriting it from scratch.
Don't start new projects and expect them to succeed just because you read some more articles or used a new framework. Instead, identify the failures of your existing projects and fix their specific problems. Whenever you need to change your code, ask yourself how to restructure it to support similar changes in the future. This is what you need to do anyway, because there will be similar changes in the future.
By doing those refactorings you'll stumble across various specific questions you can ask and read articles about. That way you'll learn more than by just asking general questions and reading general articles about maintenance and frameworks.
Start cleaning up your code today. Don't defer it to your future projects.
(The same is true for documentation. Everyone's first docs were very bad. After several months they turn out to be too verbose and filled with unimportant stuff. So complement the documentation with solutions to the problems you really had, because chances are good that next year you'll be confronted with a similar problem. Those experiences will improve your writing style more than any "how to write good" style guide.)
I'd honestly recommend looking at Martin Fowler's Patterns of Enterprise Application Architecture. It discusses a lot of ways to make your application more organized and maintainable. In addition, I would recommend using unit testing to give you better comprehension of your code. Kent Beck's book on Test-Driven Development is a great resource for learning how to address change to your code through unit tests.
To improve the maintainability you could:
If you are the sole developer, then adopt a coding style and stick to it. That will give you confidence later, when navigating through your own code, about the things you could possibly have done and the things you absolutely wouldn't have. Being confident where to look, what to look for, and what not to look for will save you a lot of time.
Always take time to bring documentation up to date. Include the task in the development plan; include that time as part of any change or new feature.
Keep documentation balanced: some high-level diagrams, meaningful comments. The best comments tell what cannot be read from the code itself, like the business reasons or "whys" behind certain chunks of code.
Include in the plan the effort to keep code structure, folder names, namespaces, and object, variable and routine names up to date and reflective of what they actually do. This will go a long way towards improving maintainability. Always call a spade a "spade". Avoid large chunks of code; structure it by the means available within your language of choice, and give chunks meaningful names.
Aim for low coupling and high cohesion. Make sure you are up to date with techniques for achieving these: design by contract, dependency injection, aspects, design patterns etc. (see the dependency-injection sketch after this list).
From a task management point of view, you should estimate more time and charge a higher rate for non-continuous pieces of work. Do not hesitate to make the customer aware that you need extra time to do small non-continuous changes spread over time, as opposed to bigger continuous projects and ongoing maintenance, since the administration and analysis overhead is greater (you need to manage and analyse each change, including its impact on the existing system, separately). One benefit your customer is going to get is a greater life expectancy of the system. The other is accurate documentation that preserves their option to seek someone else's help should they decide to do so. Both protect the customer's investment and are strong selling points.
Use source control if you don't do that already
Keep a detailed log of everything done for the customer plus any important communication (a simple computer or paper based CMS). Refresh your memory before each assignment.
Keep a log of issues left open, ideas, suggestions per customer; again refresh your memory before beginning an assignment.
Plan ahead how the post-implementation support is going to be conducted, and discuss it with the customer. Make sure your systems are easy to maintain. Plan for parameterisation, monitoring tools, built-in sanity checks. Sell post-implementation support to the customer as part of the initial contract.
Expand by hiring, even if you need someone just to provide that post-implementation support and do the admin bits.
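As promised in the low coupling/high cohesion item above, here is a minimal dependency-injection sketch; all class names are invented for illustration:

```python
class SmtpMailer:
    """Concrete dependency: would talk to a real mail server (stubbed here)."""
    def send(self, to: str, body: str) -> None:
        print(f"SMTP -> {to}: {body}")

class FakeMailer:
    """Test double: records messages instead of sending them."""
    def __init__(self):
        self.sent = []
    def send(self, to, body):
        self.sent.append((to, body))

class BookingService:
    """Depends on an abstract 'mailer' injected by the caller, so the
    business logic is not coupled to any one mail implementation."""
    def __init__(self, mailer):
        self.mailer = mailer
    def confirm(self, customer_email: str) -> None:
        self.mailer.send(customer_email, "Your booking is confirmed.")

# Production wiring vs. test wiring differ only at the injection point:
BookingService(SmtpMailer()).confirm("a@example.com")
fake = FakeMailer()
BookingService(fake).confirm("b@example.com")
assert fake.sent == [("b@example.com", "Your booking is confirmed.")]
```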
Recommended reading:
"Code Complete" by Steve Mcconnell
Anything on design patterns are included into the list of recommended reading.
The most important advice I can give, having helped grow an old web application into an extremely highly available, high-demand one, is to encapsulate everything. In particular:
Use good MVC principles and frameworks to separate your view layer from your business logic and data model.
Use a robust persistence layer, so you don't couple your business logic to your data model.
Plan for statelessness and asynchronous behaviour (a small sketch of the stateless idea follows below).
Here is an excellent article on how eBay tackles these problems
http://www.infoq.com/articles/ebay-scalability-best-practices
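As a minimal sketch of the statelessness point above, with an in-process dict standing in for the shared session store (a database or memcached in production):

```python
# Keep no per-user state in the web process itself, so any server in
# the pool can handle any request. 'session_store' is an illustrative
# stand-in for an external shared store.
session_store = {}

def handle_request(session_id: str, action: str) -> str:
    # Fetch state from the shared store on every request; never cache
    # it in a server-local variable between requests.
    state = session_store.get(session_id, {"cart": []})
    if action.startswith("add:"):
        state["cart"].append(action[4:])
    session_store[session_id] = state  # write back; the server stays stateless
    return f"cart = {state['cart']}"

print(handle_request("sess-1", "add:book"))
print(handle_request("sess-1", "add:pen"))  # could be served by another server
```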
Use a framework / MVC system. The more organised and centralized your code is, the better.
Try using Memcache. PHP has a built-in extension for it; it takes about ten minutes to set up and another twenty to put into your application. You can cache whatever you want with it - I cache all my database records in it, for every application. It does wonders.
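The pattern described above is commonly called cache-aside. Here is a minimal sketch with a plain dict standing in for memcached; a real deployment would use the Memcache client the answer mentions:

```python
import time

cache = {}  # stand-in for memcached; entries are (expiry_time, value)
TTL = 60    # cache lifetime in seconds; illustrative

def slow_database_lookup(record_id: int):
    time.sleep(0.1)  # pretend this is an expensive SQL query
    return {"id": record_id, "name": f"record {record_id}"}

def fetch_record(record_id: int):
    """Cache-aside: try the cache first, fall back to the database,
    then populate the cache for the next caller."""
    entry = cache.get(record_id)
    if entry and entry[0] > time.time():
        return entry[1]                      # cache hit
    value = slow_database_lookup(record_id)  # cache miss
    cache[record_id] = (time.time() + TTL, value)
    return value

print(fetch_record(7))  # slow: hits the "database"
print(fetch_record(7))  # fast: served from the cache
```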
I would recommend using a source control system such as Subversion if you aren't already.
You should consider maybe using SharePoint. It's an environment that is already designed to do all you have mentioned, and has many other features you maybe haven't thought about (but maybe you will need in the future :-) )
Here's some information from the official site.
There are 2 different SharePoint environments you can use: Windows SharePoint Services (WSS) or Microsoft Office SharePoint Server (MOSS). WSS is free and ships with Windows Server 2003, while MOSS isn't free but has many more features and covers almost all your enterprise's needs.

Neural Networks or Human-computer interaction

I will be entering my third year of university in my next academic year, once I've finished my placement year as a web developer, and I would like to hear some opinions on the two modules in the Title.
I'm interested in both, however I want to pick one that will be relevant to my career and that I can apply to systems I develop.
I'm doing an Internet Computing degree; it covers web development, networking, database work and programming. Though I had set myself on becoming a web developer, I'm not so sure about that any more, so I'm trying not to limit myself to that area of development.
I know HCI would help me as a web developer, but do you think it's worth it? Do you think Neural Network knowledge could help me realistically in a system I write in the future?
Thanks.
EDIT:
I thought it would be useful to follow-up with what I decided to do and how it's worked out.
I picked Artificial Neural Networks over HCI, and I've really enjoyed it. Having a peek into cognitive science and machine learning has ignited my interest in the subject area, and I hope to take on a postgraduate project a few years from now, when I can afford it.
I have got a job which I am starting after my final exams (which are in a few days) and I was indeed asked if I had done a module in HCI or similar. It didn't seem to matter, as it isn't a front-end developer position!
I would recommend taking the module if you have it as an option, as well as any module covering biological computation; it will open up more doors should you want to go on to postgraduate research in the future.
The worthiness depends on three factors:
How familiar are you with the topic already?
How good is the course/class you want to take?
Which are you more interested in?
Especially for HCI, there is a broad range of "common sense" information that you could just as easily obtain from reading a good book or a range of articles published on the internet. On the other hand, there do exist many deeper insights, mostly obtained from psychology studies. If the course is done right, you can indeed learn a lot about the topic and about the real considerations that go into developing an interface.
For Neural Networks, one has to say that this is a typical hype topic. The main question is which application domain the course uses to deal with neural networks. You can be quite sure that you won't program or use any neural networks for web development. On the other hand, if the course is done right, this could be a good opportunity for you to broaden your knowledge, especially to deepen your understanding of the theory of computer science. This highly depends on how the course is laid out, though.
HCI is a topic which helps your career as a web developer, but only if you feel incompetent in that topic (then it is a must) or if the course is done very well. Neural Networks is a topic which has more potential to be really interesting, hardcore computer science, where you genuinely come to understand something better. If you are interested in NN, you should not pass up the opportunity to get an education that is not narrowly concentrated on the domain of web development -- and, after all, you may find more interest in other areas (it is always good to know other directions you might like to go in the future).
Neural networks sound cool until you read the fine print:
In modern software implementations of artificial neural networks, the approach inspired by biology has more or less been abandoned for a more practical approach based on statistics and signal processing.
This is something that has mystified me for years. Here you have an amazingly complex and powerful control system (real-world biological neural networks), and an academic discipline that appears to be about modeling these systems in software but that has in reality abandoned that activity.
If you're doing web development, your time is probably better spent in the HCI course.
Go with what interests you the most. The HCI stuff will be much easier to pick up later as needed; you'll likely never get another chance to learn about neural networks!
For prospective employers (at least the good ones!) you need to show a passion and excitement about what you do. I'd sooner hire someone who can enthusiastically talk about neural networks than someone who has an extra credit in HCI.
Unless you want to do the research end of the world, i.e. get a Master's/PhD, go HCI.
I studied Neural Computation at university when I studied AI. I now run my own company. The number of times since I studied that I have used my NN skills equals zero. I'm glad I did it, as it was quite fascinating, but I would have found HCI much more useful from the position I'm in now. I think you'd pick up a lot more insight relevant to the software industry from an HCI course, but if you think your experience should be more on the esoteric/almost arty side of development, go for NN.
Which sounds like more fun? Or, equivalently, which will you work harder at? Pick that one.
I did two courses in NN and some other AI courses. It's fun to poke around with that stuff, and I actually managed to use it in some of the things I've done, like face recognition; it's useful in some other areas too, e.g. if you want to fit your lab data. I have never used NNs in my web development career, though I'm sure they could be used for something. What it really boils down to, however, is finding a client or employer willing to pay for it when you can just take the straight path. So I would rather read a book about it if I weren't that hardcore about it.
Fundamental Neural Networks doesn't take too much knowledge of math, and it was what I used in my first course.
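To give a feel for how little math the basics need, here is a toy perceptron learning the AND function (purely illustrative, not from any particular course or book):

```python
# A single perceptron learning AND -- roughly the level of math a
# first NN course starts from.
def train_perceptron(samples, epochs=20, lr=0.1):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0]*x1 + w[1]*x2 + b > 0 else 0
            err = target - out
            w[0] += lr * err * x1   # classic perceptron update rule
            w[1] += lr * err * x2
            b += lr * err
    return w, b

data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
for (x1, x2), target in data:
    got = 1 if w[0]*x1 + w[1]*x2 + b > 0 else 0
    print((x1, x2), "->", got, "expected", target)
```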
As a programmer-to-be, you need knowledge of neural networks. If parallel processing is the way to go in hardware, then future programmers must be knowledgeable about neural networks. Don't forget that NNs work better with noisy or imprecise data, while other systems may not. Note that most data we use for analysis are sample data, a fraction of the whole, and you can imagine what happens if some points in the sample are way off. So you need knowledge of NNs if you want to last in the computer programming field.