Group games to teach computer programming (either functional or imperative)

(See end for summary of updated question.)
I want to convey to groups of people (kids or adults) how a computer program written in a high-level language works, and what the relationship is of that program to the computer as a consumer device as they know it (a TV-like box that "does" typing and "internet").
I want to do it without computers. Not because I don't have them, but because I want a fun, physical activity that involves people the way acting, dance, music, sports, and capture-the-flag are fun.
I have read Teaching beginner programming, without computers here on Stack Overflow; its reference to Computer Science Unplugged comes closest, but most of the activities there are either too complex, require too many props, or focus on specific computer-science concepts.
I have also read Games that teach Programming Fundamentals but almost nothing matched my description in my first paragraph above.
And just for good measure, I have read Should functional programming be taught before imperative programming? so I am open to activities to teach either of those.
Keep in mind these requirements, some of which are subjective:
physical
no props (or very few)
fun
involves as many of the senses as possible
simulates the experience of writing a program and running it on a computer
no computers anywhere in the picture
is a game (competitive or cooperative)
It occurs to me that one source of material might be those team-building games that companies send you on. But those are designed for team-building, not teaching what writing and running a computer program is. But maybe you get the idea. Another way of looking at this question is to suggest what search terms I should use to find more answers -- though I usually can pick good search terms, an implicit "or" of "computers" and "games" will not find what I want because that combination is reserved for something totally different.
Update:
Thanks for responses so far!
I have now clarified that I'm interested in simulating the operation of a high-level-language program rather than either how the machine operates (1's and 0's) or specific concepts
With that clarification, you will be able to say specifically whether the game you suggest or found teaches functional or imperative programming
With that clarification, please also respond to the part about games to teach the relationship of a computer program to the computer. What needs to be taught is that other consumer devices that physically look similar do not have "programs" -- why?
Your direct answers are much appreciated; if you can also find more ready-to-use sources beyond Computer Science Unplugged that will be great too
See my comments on answers so far, all of which are made in the spirit of thanks for what you've written, and not meant to be critical in any way.

Fundamentally, computers only do a few, very simple things:
They can do basic math,
They can move data from one place to another,
They can loop, and
They can make simple decisions.
The power of computers lies in the fact that they can do these simple things millions of times per second.
At the physical game level, I believe this is about all you can teach. Beyond that, I believe computer simulations and/or multimedia presentations are required (or, at the very least, a whiteboard).
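For what it's worth, once that whiteboard does come out, all four primitives fit in a handful of lines. A minimal sketch (Python, purely for illustration):

```python
scores = [3, 9, 4]            # data sitting in one place
total = 0
for s in scores:              # loop
    if s > 4:                 # simple decision
        total = total + s     # basic math
result = total                # move data from one place to another
print(result)                 # prints 9
```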

1. Human Bubble Sort
Just try a Human Bubble Sort: ask a group of people (I'd recommend at least 4; there's no real upper limit :-) ) to sort themselves by the alphabetical order of their family names, following the bubble-sort principle.
Example : https://www.youtube.com/watch?v=8QD-R_MfDsQ
Works for kids and grownups.
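For the debrief afterwards, the human line maps almost one-to-one onto code. A short sketch (Python, with made-up names) of the same compare-and-swap rounds:

```python
def bubble_sort(names):
    names = list(names)
    for round_end in range(len(names) - 1, 0, -1):   # one pass per "round"
        for i in range(round_end):
            if names[i] > names[i + 1]:              # two neighbours compare...
                names[i], names[i + 1] = names[i + 1], names[i]  # ...and swap
    return names

print(bubble_sort(["Mori", "Adams", "Zhou", "Baker"]))
# ['Adams', 'Baker', 'Mori', 'Zhou']
```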
2. Human Frenzy Robot
With people, sheets of paper, and arrows and symbols written on them, reproduce the principle of the Frenzy Robot in real life. Search for "lightbot" on Google.
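To connect the activity back to real programs, the card deck is literally an instruction list. A tiny interpreter sketch (Python; the card names and grid convention are my own assumptions, not lightbot's actual format):

```python
def run(program, pos=(0, 0), facing=(0, 1)):        # start at origin, facing "north"
    for card in program:
        if card == "forward":
            pos = (pos[0] + facing[0], pos[1] + facing[1])
        elif card == "left":                        # rotate facing 90 degrees left
            facing = (-facing[1], facing[0])
        elif card == "right":                       # rotate facing 90 degrees right
            facing = (facing[1], -facing[0])
        print(card, "->", pos)
    return pos

run(["forward", "right", "forward", "forward"])     # ends at (2, 1)
```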
3. Primo
For very young kids (ages 4 and up), I really like Primo, a small programmable toy you put in motion on a grid => http://www.primotoys.com/

You could demonstrate locking, and deadlock in particular, by having two teams compete to get the two halves of a key that opens the door to some reward (sweets for kids, etc.). Each team grabs one half of the key, and then neither can open the door. If they cooperate, they both get the reward.
This might be a bit advanced - not sure now having re-read it.
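For what it's worth, the game is a faithful model of a classic deadlock. A sketch of the same situation in code (Python threads; names are illustrative):

```python
import threading, time

left_half, right_half = threading.Lock(), threading.Lock()

def team(first, second, name):
    with first:                       # grab our half of the key
        time.sleep(0.1)               # by now both teams hold one half each
        print(name, "is waiting for the other half...")
        with second:                  # blocks forever: the other team holds it
            print(name, "opened the door!")

a = threading.Thread(target=team, args=(left_half, right_half, "Team A"), daemon=True)
b = threading.Thread(target=team, args=(right_half, left_half, "Team B"), daemon=True)
a.start(); b.start()
a.join(timeout=1); b.join(timeout=1)
print("Deadlock: neither team opened the door.")
# The cooperative fix: both teams agree to pick up the halves in the same order.
```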

It really was fun in CS class: the Living Turing Machine.
You need:
Some place to post the formal rules of the machine; in the beginning it's pure chaos :-D
Humans:
a. A bunch of people who stand in line and simulate the linear memory; you just need a way to distinguish 'ones' from 'zeros'. We did this by standing in the foreground or in the background, but I can imagine other ways too...
b. One person for every state of the machine
c. A 'read head' that moves left or right along the memory.
Now you just need sample programs. Start simple, for example with inverting a pattern, then go on to more complex programs like increment/decrement.
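If it helps to have the machine written down, here is the invert-a-pattern warm-up as code (a Python sketch; the rule format is my own, but the human version works the same way):

```python
rules = {
    ("invert", "0"): ("1", +1, "invert"),   # (write, move, next state)
    ("invert", "1"): ("0", +1, "invert"),
    ("invert", " "): (" ", 0, "halt"),      # blank cell past the end: stop
}

def run_tm(tape, rules, state="invert", head=0):
    tape = list(tape) + [" "]               # the line of people, plus one blank
    while state != "halt":
        write, move, state = rules[(state, tape[head])]
        tape[head] = write                  # the person flips (or not)
        head += move                        # the read head walks along
    return "".join(tape).strip()

print(run_tm("10110", rules))               # 01001
```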

For inspiration: an example of how physical people can act out the Bubble Sort algorithm through dance => https://www.youtube.com/watch?v=lyZQPjUT5B4

Siemens PLC programming best practices

My question is pretty simple. Is there any useful place for learning to work with Siemens PLCs?
Full Disclosure:
I was a Software Engineer for Rockwell Automation working with their A|B PLCs
You probably won't like my answer
To put it plainly: programming PLCs, whether you're using Ladder Logic, Structured Text, Instruction List, Sequential Function Chart, FBD, or Continuous Function Chart, isn't the same as programming software in a language like C++, Java, or JavaScript.
Simply put, there is no one set of "best practices" that fits every use case. Unlike standard software development, where you can apply principles like SOLID to make your code easier to read, maintain, and extend, PLC programs are tied to a very real physical process and physical machinery. What you often find in the industry is that every plant/manufacturer/facility establishes its own set of best practices given its needs and processes.
To give an example:
Scenario 1:
The logic used to run the distillation process for a small local brewery may include subroutines or even a loop. They may allow five or fewer warnings in their code, and a few unused tags. That is totally fine: they are making beer, the process isn't critical, a bad batch won't kill anyone, and they only have two pumps that the logic iterates over. So if a problem needs troubleshooting, the logic in the subroutines or loop won't be too much of a headache.
Scenario 2:
Now suppose I am a global pharmaceutical company producing hundreds of millions of life-critical drugs each year (say, insulin). My logic has zero subroutines and no looping; I have zero tolerance for errors or warnings, and absolutely no unused tags. Why? Because this is a highly regulated industry, and if there is an issue with one of my products, people may die. And why no subroutines or looping? Because I am a huge company with hundreds of pumps, mixers, etc. When one of those pieces of equipment goes down, I don't want to dig through some horrible looping logic that is responsible for hundreds of pumps. I want to look at one select piece of the logic that I can quickly understand, correct, and get my line back up and operating.
I am sure you can find some articles or courses out there (like the one you already took) that explain some basic "best practices", but in the real world you will need to adapt your logic to each individual scenario to achieve the best outcome. That is my humble two cents on the matter; best of luck to you!
Udemy - there are some courses there, though I haven't tried them myself.
I've watched lots of useful videos on YouTube.
http://www.plcdev.com/siemens_simatic_step_7_programmers_handbook - quite old, but could be useful.
Siemens forums, official manuals, guides. There is lots of info there, quality varies sometimes, but mostly good.
BTW, a nice thing about Siemens is that you can often look up things just by searching the web. That is not the case for some other PLCs...
Good luck!
If you already work in a factory, read the code that runs in the PLCs, and start modifying it where needed. That's how I started; I was initially a lowly automation guy who pulled cables, changed broken sensors, etc.
If you don't, and you need a way into the field, the path as an ordinary tech worker usually runs through being an electrician or automation engineer. As an entrepreneur/independent contractor, I have seen people just do it: win a contract for some public company's request, do the schematics, write the code, and do the electrical installation all by themselves, or do parts of it with other contractors. You need previous experience to pull that off.
As for some practices:
If you are modifying existing code, always follow the existing style and use the existing functions and blocks.
Do not use programming patterns from the ordinary IT world in low-level PLC code, or use them with caution. The reason is that your code will probably have to live for years and years, and it has to be debuggable. Patterns usually add layers of complexity, and complexity leads to harder debugging. In the automation world it's usually better to debug stuff that's closer to the hardware.
If you are starting a project with tens or hundreds of sensors/motors/actuators, start using reusable blocks.
All best practices are learned in the field; sadly, there's no other way. I know it's a bit of a catch-22 sometimes: you need work to get experience, and experience to get work. I entered the automation world, and later the IT world, the same way: get a job at the low end, as a maintenance guy or junior IT developer, gather experience, and in a year or two you will be mid-level.
And keep these constraints in mind while you're programming PLCs:
PLC programming is very low-level programming
memory size matters; every byte is important
logic has to be concise and as short as possible: sometimes you have to be good at math!
the machine you're working on is dangerous and can damage product, equipment, or people
the machine you're working on is expensive and is built to produce for years
It's the same as in computer programming: each programmer has their own way of programming; there is no single truth. Sometimes you'll find interesting existing code: don't hesitate to reuse it if it looks smarter and is more efficient.
Find your own way, and keep in mind that the machine you're working on is dangerous for you and the people walking around it (it's not always the case, but it's important to keep this in mind while programming).
And above all, don't forget the first rule of industrial automation: if it runs correctly, don't touch it!

real-time synchronization between two devices over a wireless signal

I have never done embedded programming (I don't know if that's what you call this) and know nothing about it. My questions:
Is it possible to have two devices sharing a wireless connection (no internet, just between themselves; perhaps Bluetooth, but I don't know what's best)?
Is it possible to have one person editing a file and another person editing the same file, with both seeing the changes in real time, sort of like Google Docs?
Does this already exist?
What can I do to get started with this kind of programming?
To clarify:
I want two people with iPhones, or any other handheld devices, to be able to edit a text file at the same time and see each other's changes in real time. How do I do this?
There are a bunch of slightly strange assumptions hidden in your questions. I'll try to unpick them as best as I can.
You've used "embedded" programming in a strange way. Usually this would suggest some kind of low-power devices used in settings without direct user interaction in some sense (e.g. factory controllers, refrigerator controllers, sensor nodes), performing a very specific task, but you've gone on to talk bout people editing files. What exactly would be the user interface here? What would make this embedded programming? I think you need to describe an application before any advice can be offered.
If you actually mean embedded devices, then whether they can connect wirelessly to one another is going to depend on the nature of the device. Similarly, the protocol/technologies involved will depend on the device. Embedded programming tends to be very much device-specific. There certainly exist wireless sensor nodes, for example, that incorporate small radio transceivers for serial comms.
Google docs already exists. Without a clearer problem description it's difficult to say whether what you want exists already or not.
I think you should really figure out exactly what kind of programming it is that you want to do before we can offer points as to how to best get started with it. Maybe look up a definition of "embedded programming" and see how this relates to your goals such that you can reformulate your questions a little more clearly.
I'm not sure how "real time" would fit into this scenario either. This term is used and abused in many ways. Things are only ever real-time with respect to some constraint, usually defined in terms of the application.
(Note: This might have been more appropriate as a comment, but I felt there was too much to respond to in order to sum up within character limits, and I hope correcting some of the confusion constitutes something of an answer, given the limitations of the question).
Two devices can share a connection like this; it's done all the time. There are many, many protocols for this. Whether it is wired or wireless, or uses the Internet, doesn't really matter for 90% of it.
This is sort of doable, but not really. You have a race condition when two people are editing at the same time. This is generally avoided by locking small parts of the document from all but one editor at a time (like only one person being able to edit one cell of a spreadsheet at a time), but that has problems too (like the one active editor taking way too long; a problem seen in many source version control systems as well).
Point 1 already exists in many, many forms. Point 2 sort of exists in many forms, but the problems I mentioned are impossible to completely overcome.
The way you asked this question leads me to believe that you are very far from being able to do this. In addition, you didn't tell us anything about what you do know how to do. Can you write a simple text editor for an iPhone (or anything else)? Simple text editors from scratch that aren't crappy aren't easy to write.
What you need to do, if you really want to do this, is come up with a protocol for the two (or more) devices to talk to each other. To do that, it is probably best to figure out what types of communication are available between the devices, which of them you will use, and what features they do not provide that you will need to build on top.
You could try to send patches of the file (or something similar) between the two devices as edits are made, but then you'll have to decide what to do in the event of a collision (edits near the same place).
Alternately you could have the two devices exchange permission to make edits (like in token ring networks).
You still have a problem if the two devices lose communication with each other while editing the file, though. With the token-ring-style setup you also risk losing the token, with neither device able to recover automatically. Whatever you do, you end up with the problem of the two devices having different ideas of the file's contents.
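To make the token idea concrete, here is a toy sketch (Python; the class and method names are invented for illustration, and it ignores transport, timeouts, and the lost-token recovery problem mentioned above):

```python
class Device:
    def __init__(self, name):
        self.name, self.text, self.has_token = name, "", False

    def edit(self, new_text):
        if not self.has_token:
            raise RuntimeError(self.name + " must wait for the token")
        self.text = new_text

    def pass_token(self, other):
        other.text = self.text            # ship the latest contents as the "patch"
        self.has_token, other.has_token = False, True

a, b = Device("A"), Device("B")
a.has_token = True
a.edit("hello")
a.pass_token(b)
b.edit("hello world")                     # B may edit only while holding the token
b.pass_token(a)
print(a.text)                             # hello world
```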
"iphones or any other hand held device" - the technology stack to do that doesn't exist today. You have to co-ordinate between multiple languages and systems. (Okay, maybe you want to write that software, but it's a huge undertaking).
Your best bet would be to create a web page that all of the mobile browsers can work on and save a text file from.
Of course it's possible. Bluetooth does this. Wi-Fi does this if you join an ad-hoc network.
Of course it's possible. Just run the Google Docs server on one of the devices.
It might.
Way too vague.

Requirements for a game

I'm writing an iPhone game and I am trying to write some requirements documents. I have never written requirements before, so I got the book Software Requirements. I have not finished it yet, but I foresee some issues, as this book is targeted at businesses. My main question: I am the only person involved with this game, and I feel the main purpose of the requirements document should be to nail down as many conceptual ideas of how the game works as I can before I am deep into design or construction. Does anyone have suggestions on how I should lay this out? Should I still try to mimic the template provided in the book where it makes sense, or, since I am both the sole developer and product owner, should I just stick to game concepts?
You're right that traditional SRS documents don't really fit games documentation all that well. Games instead have a general Game Design Document. It's usually created before any work on the game begins, and it's often edited as the development process goes to keep straight the intended end-result and specifics of the game.
While business software requirements documents are like contracts between a client and developer on what to produce, game design docs are more often specifications from the designer to the artists and programmers on what exactly they need to develop.
There is no specific layout to use. But you should consider who you're writing the document for. Is it for a class, for yourself, for peers after the project is done? The level of detail and the kind of things you include will be different depending on your audience. The format itself is very flexible, as long as it's coherent.
Brenda Brathwaite has a good blog entry on this subject which you might find helpful.
There is a semi-recent article from gamedev.net on the subject as well.
[Poor Jacob, you just read a book on the topic, and, collectively, the SO community writes another one for you, along with extra links, and probably with diverging views ;-) ]
Although I'm not familiar with the book you mention in the question, I think the following suggestion may help you both take the all-too-important question of requirements seriously and relax a bit about it.
Being a "team of one", it is particularly important, and somewhat paradoxical, that you go through the effort of formalizing the requirements. However, rather than putting too much emphasis on the form, you may find an Agile approach to developement (and hence to requirements gathering) more appropriate. With regards to requirements, one of the main advantages of this approach, is flexibility, i.e. the understanding that while they should be formalized (with limited time/effort), requirements should be allowed to change (within limits) as part of an iterative process towards production of the target product.
In very broad terms, this generally go as follows:
write "user stories", these are individual "cards" (yes, physical cards, say 4 inches by 5 inches, are good, for you can then move then around, sort them etc.)
each story tells a particular feature of the application, here the game, from the end-user's perspective. You can/should start all cards with "As a user, I need the game to..." then follow with a particular feature, for example "... show my high score on the same page as the global high-scores are kept [because ... here optional reasons for why user may want this feature].
review each story and assign a rough estimate of the time involved in implementing it
review each story and assign a priority level (the scale may vary, but something simple like "Must have for Version 1.0", "Should eventually be in there, for sure", "Would be nice to have" and "Maybe nice to have...")
organize releases, on the basis of what you can do within, say, 2 or 3 weeks maximum. If a particular feature would take too long, schedule it for a later release.
implement the features assigned to the current release
iterate through this release cycle, reviewing the requirements as you go; both the relative importance of features and the need for new features may become evident with the insight provided by using the [incomplete/imperfect] intermediate releases.
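If it helps to see the mechanics, the planning step above can be mocked up in a few lines (Python; the story names, priority scale, and 10-day budget are made-up examples, not a prescribed tool):

```python
PRIORITY = {"must": 0, "should": 1, "nice": 2, "maybe": 3}

stories = [                                   # (story, priority, estimate in days)
    ("show my high score with the global high scores", "must", 3),
    ("save the game when interrupted", "must", 5),
    ("choose an avatar", "nice", 4),
    ("background music", "should", 2),
]

def plan(stories, budget_days=10):            # group stories into ~2-week releases
    releases, current, used = [], [], 0
    for story, prio, days in sorted(stories, key=lambda s: PRIORITY[s[1]]):
        if used + days > budget_days:         # this release is full
            releases.append(current)
            current, used = [], 0
        current.append(story)
        used += days
    releases.append(current)
    return releases

for i, release in enumerate(plan(stories), 1):
    print("Release", i, ":", release)
```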
Books like the one you describe are focused at a different audience, but there is value in the general concepts presented. Fully developed requirements documents are not as common as you might think. Don't let anyone think that you are a 'bad developer' for not having the most detailed requirements.
Requirements docs might be more important if you need to communicate the requirements with a co-developer.
If you are the sole developer, I would strongly recommend that you spend your effort on the design and implementation of the game rather than on requirements. If you have a good idea of what you're building, then let this flow as you build it.
Documentation can help you. The question is what is going to be most beneficial. Maybe design decisions are more critical than requirements for you but not for others. You'll maybe want to have a list of things that people have requested or ideas that you think of but cannot implement straight away. Sometimes a whiteboard can be handy for sketching out things, it's not just a tool for collaboration with other people.
Here's just a general approach...
Solidify the concept...write it in plain English first (ex: The game is a first person shooter. You kill zombies and hunt for treasure.)
Get a paper pad and pencil and draw out the general flow of the game and the main screens the users will encounter...main menu, options screen, help, etc. Make sure it makes sense.
Go to a site like mockingbird and create the detail wireframes for your screens...
Print these out and do some paper prototyping...i.e. put the printout in front of you and 'click' on a button...then bring up the appropriate screen...then click on another button, etc.
Once that makes sense, you can try to start coding your game.
Personally, I believe you should find your own way to do this. The most commonly available templates will not match your requirements; they might be suitable for a common commercial server application, but not for a game. And since iPhone gaming is a new trend, you may have to look at it from a different perspective. You may not be able to fill a document with standard requirements, and you may end up with a different set of new types of requirements.
Just a suggestion... Sign up with Google Sites, and create a private site with documentation of the game, requirements, technical aspects, work log, etc... You can share it with select people, and it always keeps edit history.
I like it better than a Wiki because it is more structured, and just plain simple to use.

The Framework/IDE Knowledge Trap

We don't teach children calculus first. We first teach them arithmetic, then algebra, then geometry, then analytical geometry, then finally calculus.
Why, then, do we teach our computer scientists frameworks and IDEs first? Some curricula do force students to learn computer science fundamentals, but the vast majority of graduates that I see could not compose a framework of their own to save their lives.
Where then is the next generation of tool builders?
How can we promote the understanding necessary to create frameworks and development environments?
This is of course a generality. Not all education is lacking, but it seems to be the majority and it brings down the quality of our profession as a whole.
I think the analogy is a bit off. A better analogy would be "We don't teach our kids to use calculators to add and subtract, why teach programmers to use an IDE to program?"
Get rid of HR departments that require X years experience in Y. The universities are just tailoring their course to the HR department's requirements.
I employ graduates who can code in something (I really don't care what language) and who can learn.
I see your point, although I think the math analogy doesn't quite fit. You have to know basic arithmetic to be able to get anything done in any other math discipline.
When I began programming, frameworks were mostly unheard of. If you wanted a binary tree, by God, you went and wrote one, in C or Assembler. That was basically it, so to get anything done at all you had to know a lot.
Today, Frameworks and IDEs and designers make it possible for "noobs" to create actually pretty brilliant things without knowing the first thing about how to build a framework, or a compiler, or manage memory allocation.
The real issue is: what about all the dingbats who think they are awesome, great programmers because they used FrontPage or Access? Managers have a hard time telling the difference between that kind of programmer and one who really knows software development as a discipline.
So, specifically, why is it that way? Because everyone wants a job and nobody hires programmers that know how to build a binary tree. They want programmers that know .Net or J2EE, etc.
I would argue that there is probably enough work out there for 9-to-5 programmers who can start at the framework level and go up from there. The truly good ones, mostly the program-as-a-career and/or program-as-a-hobby types, are going to pick up the knowledge they may have missed in college over time anyway. You can't force everyone to be a wonderful programmer no matter what curriculum you teach. Inquisitive students are going to learn the fundamentals whether they're taught in class or entirely on their own.
There are tool makers and tool breakers. And of course there are tools, but let's not go there.
If you have a good look around an automotive workshop, you will see a lot of funny little tools that you don't see on the shelves in hardware stores. Like the ones for pushing back brake caliper pistons. Or the clamps for compressing valve springs so you can get the collets out with one hand while chatting to your mates (instead of watching them fly across the room when the spring slips out from under your screwdriver).
These were designed by mechanics. They're really effective, generally small and cheap, and totally incomprehensible until you've seen them in action.
Most of the profound changes in automotive technology were bottom-up, but top-down is also needed. Individual mechanics can't make fundamental technology changes like the switch from cast iron to alloy heads. A new broom sweeps clean, an old broom knows the corners. You need both.
But I digress: the point is that the mechanics couldn't design these tools if they lacked fundamental skills and knowledge. My father built me an entire motorcycle from scrap iron when I was a kid. As an adult, because I lack his skills and knowledge and modes of thought, I can barely maintain the bike I bought from Honda, much less take to it with an oxy like Mr T in a creative frenzy.
With code, I am as my father was with steel. Donald Knuth is my constant companion, and when the wireless protocol for our GPS loggers needs to be implemented in .NET it's me they come to see. The widget monkeys wouldn't know where to start.
I think the problem is in fact the GUI paradigm in general.
Microsoft made using computers much easier when it popularized the graphical user interface. It brought that interface metaphor (the desktop, the file) to the domain of programming as well, and very effectively too, with its Visual Basic tool.
But just as the GUI obscures what happens "under the hood", so does the IDE obscure the manipulation of bits and bytes. The question is, of course, the risk-to-reward ratio: how much understanding do programmers lose in exchange for productivity?
A cursory look at "The Art of Computer Programming" might show why IDEs are useful; "The ultimate packing density is achieved when we have 1-bit items, because we can cram 64 of them into a single 64-bit word. Suppose, for example, that we want a table of all odd prime numbers less than 1024, so that we can easily decide the primality of a small integer. No problem; only eight 64-bit numbers are required:
p0 = 011101101101001100101101001001001100101100101001000101101101000000
p1 = . . ."
Programming is really hard, you can see how an IDE might help. :^)
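Out of curiosity, Knuth's odd-primes table is easy to reproduce (a Python sketch; the bit ordering here is my own choice and may not match the book's exactly):

```python
def sieve(n):                           # classic Sieve of Eratosthenes
    is_prime = [False, False] + [True] * (n - 2)
    for p in range(2, int(n ** 0.5) + 1):
        if is_prime[p]:
            for m in range(p * p, n, p):
                is_prime[m] = False
    return is_prime

prime = sieve(1024)
words = [0] * 8                         # eight 64-bit words, one bit per odd number
for i in range(512):                    # bit i stands for the odd number 2*i + 1
    if prime[2 * i + 1]:
        words[i // 64] |= 1 << (i % 64)

def is_odd_prime(n):                    # n must be odd and < 1024
    i = (n - 1) // 2
    return (words[i // 64] >> (i % 64)) & 1 == 1

assert is_odd_prime(3) and is_odd_prime(1021) and not is_odd_prime(9)
```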
Learning the abstraction is easier than learning the details when it comes to programming. It's harder to teach someone to hand-code assembler to print "Hello World" than it is to have them throw together a form with a button on it that shows a "Hello World" message when the button is clicked.
You didn't know how to build the engine of a car before learning to drive, did you? Because it's not necessary in order to drive. In the same vein, you don't need to learn how a linked list or binary tree works in order to maintain a list of names and search them.
There will always be those who want to get under the hood and learn the "why" of things, but I don't think it's required to get things done.
I always screen applications by asking difficult questions that they could only answer if they understood how something really works. I think it is a real shame colleges and universities are teaching people framework based development but not focusing on core software principles. I agree that what matters more than anything is someone who understands how programming works and has the drive to learn anything they can about it.
Most universities I know of have an introduction to computer programming course that teaches basic programming concepts. Unfortunately it is impossible to teach programming without actually writing code.
The problem is that some prefer to teach this course using an OO language such as Java or C#, and so the students must use Visual Studio (or the Java equivalent).
It is very hard to explain the basic concepts when the IDE forces you to work in a certain way.
I think the first language students learn should be a procedural language such as C. That way there are fewer layers of abstraction between them and the basic CS concepts.
Agree with cfeduke.
I looked at the coursework for the same CS courses from 2 years previously, and it was way harder; 5 years previously, way, way harder.
The CS bar is being lowered more and more, presumably because there are more and more jobs that don't require working knowledge of any of the complicated CS subjects. There are huge numbers of jobs for people who just cut code.
Traditionally, people who wanted to be programmers did CS courses, and as coding has gotten easier this is still the case.
What really needs to happen is for CS not to be a requirement for professional software development. Instead there needs to be another curriculum that focuses more on getting people out the door and cutting code.
This would leave CS to be the course for your next generation of tool builders.

Neural Networks or Human-computer interaction

I will be entering my third year of university in my next academic year, once I've finished my placement year as a web developer, and I would like to hear some opinions on the two modules in the Title.
I'm interested in both, however I want to pick one that will be relevant to my career and that I can apply to systems I develop.
I'm doing an Internet Computing degree, it covers web development, networking, database work and programming. Though I have had myself set on becoming a web developer I'm not so sure about that any more so am trying not to limit myself to that area of development.
I know HCI would help me as a web developer, but do you think it's worth it? Do you think Neural Network knowledge could help me realistically in a system I write in the future?
Thanks.
EDIT:
I thought it would be useful to follow-up with what I decided to do and how it's worked out.
I picked Artificial Neural Networks over HCI, and I've really enjoyed it. The peek into cognitive science and machine learning has ignited my interest in the subject area, and I hope to take on a postgraduate project a few years from now, when I can afford it.
I have got a job which I am starting after my final exams (which are in a few days) and I was indeed asked if I had done a module in HCI or similar. It didn't seem to matter, as it isn't a front-end developer position!
I would recommend taking the module if you have the option, as well as any module involving biological computation; it will open up more doors should you want to go on to postgraduate research in the future.
The worthiness depends on three factors:
How familiar are you with the topic already?
How good is the course/class you want to take?
Which are you more interested in?
Especially for HCI, there is a broad range of "common sense" information you could just as easily obtain from reading a good book, or from the wide range of articles about it published on the internet. On the other hand, there do exist many deeper insights, mostly obtained from psychology studies. If the course is done right, you can indeed learn a lot about the topic and the real considerations to apply when designing an interface.
For Neural Networks, one has to say that this is a typical hype topic. The main question is which application domain the course wants to deal with neural networks in. You can be quite sure that you won't program or use any neural networks for web development. On the other hand, if the course is done right, it could be a good opportunity to broaden your knowledge, and especially to deepen your understanding of the theory of computer science. This depends highly on how the course is laid out, though.
HCI is a topic that helps your career as a web developer, but only if you feel incompetent in that area (then it is a must) or the course is done very well. Neural Networks has more potential of being really interesting, hardcore computer science, where you genuinely deepen your understanding of something. If you are interested in NNs, you should not pass up the opportunity to get an education that is not narrowly focused on the domain of web development; after all, you may discover other directions you would like to go in the future (and it is always good to know about them).
Neural networks sound cool until you read the fine print:
"In modern software implementations of artificial neural networks, the approach inspired by biology has more or less been abandoned for a more practical approach based on statistics and signal processing."
This is something that has mystified me for years. Here you have an amazingly complex and powerful control system (real-world biological neural networks), and an academic discipline that appears to be about modeling these systems in software but that has in reality abandoned that activity.
If you're doing web development, your time is probably better spent in the HCI course.
Go with what interests you the most. The HCI stuff will be much easier to pick up later as needed, you'll likely never get another chance to learn about neural networks!
For prospective employers (at least the good ones!) you need to show a passion and excitement about what you do. I'd sooner hire someone who can enthusiastically talk about neural networks than someone who has an extra credit in HCI.
Unless you want to go into the research end of things, i.e. get a Masters/PhD, go HCI.
I studied Neural Computation at university when I studied AI. I now run my own company. The number of times since then that I have used my NN skills equals zero. I'm glad I did it, as it was quite fascinating, but I would have found HCI much more useful from the position I'm in now. I think you'd pick up a lot more insight relevant to the software industry from an HCI course, but if you think your experience should be more on the esoteric/almost arty side of development, go for NN.
Which sounds like more fun? Or, equivalently, which will you work harder at? Pick that one.
I did two courses in NN and some other AI courses. It's fun to poke around with that stuff, and I actually managed to apply it in some of the things I've done, like face recognition; it's useful in some other areas too, e.g. if you want to plot your lab data. I have never used NNs in my web development career, though. I am sure they could be used for something, but what it really boils down to is finding a client or employer willing to pay for it when you can just take the straight path. So I would rather read a book about it if I weren't that hardcore about it.
Fundamental Neural Networks doesn't take too much knowledge of math, and was what I used in my first course.
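As a taste of how little math the fundamentals need, here is a single neuron learning logical AND with the classic perceptron rule (a self-contained sketch, not production machine learning):

```python
import random

random.seed(0)
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]   # logical AND
w = [random.uniform(-1, 1), random.uniform(-1, 1)]            # two weights
b = random.uniform(-1, 1)                                     # bias
lr = 0.1                                                      # learning rate

for epoch in range(50):
    for (x1, x2), target in data:
        out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0       # fire or not
        err = target - out
        w[0] += lr * err * x1                                 # nudge the weights
        w[1] += lr * err * x2
        b += lr * err

print([(x, 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0) for x, _ in data])
# the neuron now reproduces the AND truth table
```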
As a programmer-to-be you need knowledge of neural networks. If parallel processing is the way to go in hardware, then future programmers must be knowledgeable about neural networks. Don't forget that NNs work well with noisy or imprecise data, where other systems may not. Note that most data we use for analysis is sample data, a fraction of the whole, and you can imagine what happens if parts of the sample are way off. So you need knowledge of NNs if you want to last in the computer programming field.