My question is pretty simple: are there any useful resources for learning to work with Siemens PLCs?
Full Disclosure:
I was a Software Engineer for Rockwell Automation working with their Allen-Bradley (A-B) PLCs.
You probably won't like my answer.
To put it plainly, programming PLCs, whether you're using Ladder Logic, Structured Text, Instruction List, Sequential Function Chart, FBD, or Continuous Function Chart, isn't the same as programming software in a language like C++, Java, JavaScript, etc.
Simply put, there is no one set of "best practices" that fits every use case. The reason is that, unlike standard software development, where you can apply principles like SOLID to make your code easier to read, maintain, and extend, PLC programs are tied to a very real physical process and physical machinery. What you often find in industry is that every plant/manufacturer/facility establishes its own set of best practices to suit its needs and process.
To give an example:
Scenario 1:
The logic used to run the distillation process for a small local brewery may include sub-routines or even a loop. They may allow five or fewer warnings in their code and a few unused tags. That is totally fine: they are making beer, the process isn't critical, a bad batch won't kill anyone, and they only have two pumps that the logic iterates over. So if there is a problem that needs troubleshooting, the logic in the sub-routines or loop won't be too much of a headache.
Scenario 2:
I am a global pharmaceutical company producing hundreds of millions of life-critical drugs each year (say insulin). Now my logic has zero sub-routines and no looping, I have zero tolerance for errors or warnings, and absolutely no unused tags. Why? Because I am in a highly regulated industry, and if there is an issue with one of my products, people may die. And why no sub-routines or looping? Because I am a huge company with hundreds of pumps, mixers, etc. When one of those pieces of equipment goes down, I don't want to wade through some horrible looping logic that is responsible for hundreds of pumps. I want to look at one select piece of the logic that I can quickly understand and correct, so I can get my line back up and operating.
I am sure you can find some articles or courses out there (like the one you already took) that explain some basic "best practices", but in the real world you will need to adapt your logic to each individual scenario in order to achieve the best outcome. That is my humble two cents on the matter; best of luck to you!
Udemy - there are some courses there, though I haven't tried them myself.
I've watched lots of useful videos on YouTube.
http://www.plcdev.com/siemens_simatic_step_7_programmers_handbook - quite old, but it could be useful.
Siemens forums, official manuals, and guides. There is a lot of info there; the quality varies, but it is mostly good.
BTW, a nice thing about Siemens is that you can often look up things just by searching the web. That is not the case for some other PLCs...
Good luck!
If you already work in a factory, read the code that runs in the PLCs and start modifying it where needed. That's how I started: I was initially a lowly automation guy who pulled cables, changed broken sensors, etc.
If you don't, and you need a break into the field, then as an ordinary tech worker the path usually runs through being an electrician or automation engineer. As an entrepreneur/independent contractor, I have seen people just do it: win a contract for some public company's request, do the schematics, write the code, and do the electrical installation all by themselves, or do parts of it with other contractors. You need previous experience to pull that off.
As for some practices:
If you are modifying existing code, always use the existing style and the existing functions and blocks.
Do not use programming patterns from the ordinary IT world in low-level PLC code, or use them with caution. The reason is that your code will probably have to live for years and years and has to be debuggable. Patterns usually add layers of complexity, and complexity leads to harder debugging. In the automation world it's usually better to debug stuff that's closer to the hardware.
If you are starting a project where you have tens or hundreds of sensors/motors/actuators, start using reusable blocks.
All best practices are learned in the field; sadly, there's no other way. I know it's kind of a catch-22 sometimes: you need work to get experience, and experience to get work. I entered the automation world, and later the IT world, the same way: get a job at the low end, as a maintenance guy or junior IT developer, gather experience, and in a year or two you will be at mid-level.
And don't lose sight of these constraints while you're programming a PLC:
PLC programming is very low-level programming
memory size matters; every byte counts
the logic has to be concise and as short as possible: sometimes you have to be good at math!
the machine you're working on is dangerous and can cause damage to product, equipment, or people
the machine you're working on is expensive and is built to produce for years
It's the same as in computer programming: each programmer has their own way to program; there's no single truth. Sometimes you'll find some interesting existing code: don't hesitate to re-use it if it looks smarter and is more efficient.
Find your way and keep in mind the machine you're working on is dangerous for you and the people walking around (it's not always the case but it's important to keep this in mind while programming).
And moreover, don't forget the first rule in industrial automation: if it runs correctly, don't touch it!
I come from a computer science background, but I am now doing genomics.
My projects involve a lot of bioinformatics, typically: aligning sequences, comparing overlap between sequences and various genome annotation features, working with different classes of biological samples, time-course data, microarray data, and high-throughput sequencing ("next-generation" sequencing, though it's really the current generation) data, that kind of stuff.
The workflow with this kind of analyses is quite different from what I experienced during my computer science studies: no UML and thoughtfully designed objects shining with sublime elegance, no version management, no proper documentation (often no documentation at all), no software engineering at all.
Instead, what everyone does in this field is hacking out one Perl-script or AWK-one-liner after the other, usually for one-time usage.
I think the reason is that the input data and formats change so fast, the questions need to be answered so soon (deadlines!), that there seems to be no time for project organization.
One example to illustrate this: let's say you want to write a raytracer. You would probably put a lot of effort into the software engineering first, then program it, finally in some highly optimized form, because you would use the raytracer countless times with different input data and would make changes to the source code over years to come. So good software engineering is paramount when coding a serious raytracer from scratch. But imagine you want to write a raytracer where you already know that you will only ever use it to raytrace one single picture, and that picture is of a reflecting sphere over a checkered floor. In that case you would just hack it together somehow. Bioinformatics is almost entirely the latter case.
You end up with whole directory trees holding the same information in different formats, until you reach the one particular format necessary for the next step, and dozens of files with names like "tmp_SNP_cancer_34521_unique_IDs_not_Chimp.csv" where you don't have the slightest idea a day later why you created the file and what exactly it is.
For a while I was using MySQL which helped, but now the speed in which new data is generated and changes formats is such that it is not possible to do proper database design.
I am aware of one single publication which deals with these issues (Noble, W. S. (2009, July). A quick guide to organizing computational biology projects. PLoS Comput Biol 5 (7), e1000424+). The author sums the goal up quite nicely:
The core guiding principle is simple:
Someone unfamiliar with your project should be able to look at your computer files and understand in detail what you did and why.
Well, that's what I want, too! But I am following the same practices as that author already, and I feel it is absolutely insufficient.
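For reference, the per-project layout that paper recommends (doc, data, src, bin, and date-stamped results directories) is roughly what I already have; a tiny helper along these lines sets it up (the script and the project name are my own illustration, not from the paper):

```python
# Illustration only: create the kind of per-project skeleton Noble (2009) describes,
# with doc/, data/, src/, bin/ and chronologically named results directories.
from datetime import date
from pathlib import Path

def make_project(root: str) -> None:
    """Create a Noble-style project skeleton under `root`."""
    base = Path(root)
    for sub in ("doc", "data", "src", "bin", "results"):
        (base / sub).mkdir(parents=True, exist_ok=True)
    # Each chunk of work gets its own date-stamped results directory.
    (base / "results" / date.today().isoformat()).mkdir(exist_ok=True)

make_project("snp_cancer_analysis")  # project name made up for the example
```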
Documenting each and every command you issue in Bash, commenting it with why exactly you did it, etc., is just tedious and error-prone. The steps during the workflow are just too fine-grained. Even if you do it, it can be still an extremely tedious task to figure out what each file was for, and at which point a particular workflow was interrupted, and for what reason, and where you continued.
(I am not using the word "workflow" in the sense of Taverna; by workflow I just mean the steps, commands and programs you choose to execute to reach a particular goal).
How do you organize your bioinformatics projects?
I'm a software specialist embedded in a team of research scientists, though in the earth sciences, not the life sciences. A lot of what you write is familiar to me.
One thing to bear in mind is that much of what you have learned in your studies is about engineering software for continued use. As you have observed a lot of what research scientists do is about one-off use and the engineered approach is not suitable. If you want to implement some aspects of good software engineering you are going to have to pick your battles carefully.
Before you start fighting any battles, you are going to have to critically examine your own ideas to ensure that what you learned in school about general-purpose software engineering is valid for your current situation. Don't assume that it is.
In my case the first battle I picked was the implementation of source code control. It wasn't hard to find examples of all the things that go wrong when you don't have version control in place:
some users had dozens of directories each with different versions of the 'same' code, and only the haziest idea of what most of them did that was unique, or why they were there;
some users had lost useful modifications by overwriting them and not being able to remember what they had done;
it was easy to find situations where people were working on what should have been the same program but were in fact developing incompatibly in different directions;
etc etc etc
Once I had gathered the information -- and make sure you keep good notes about who said what and what it cost them -- it became relatively easy to paint a picture of a better world with source code control.
Next, well, next you have to choose your own next battle. But one of the seeds of doubt you have to sow in your scientist-colleagues' minds is 'reproducibility'. Scientific experiments are not valid if they are not reproducible; if their experiments involve software (and they always do), then careful software engineering is essential for reproducibility. A lot of this is about data provenance, but that's a topic for another day.
Part of the issue here is the distinction between documentation for software vs documentation for publication.
For software development (and research plan) design, the important documentation is structural and intentional. Thus, modeling the data, reasons why you are doing something, etc. I strongly recommend using the skills you've learned in CS for documenting your research plan. Having a plan for what you want to do gives you a lot of freedom to multi-task while long analyses are running.
On the other hand, a lot of bioinformatics work is analysis. Here, you need to treat documentation like a lab notebook, not necessarily a project plan. You want to document what you did, maybe add a brief comment on why (e.g. when you are troubleshooting data), and record what the outputs and results are.
What I do is fairly simple.
First, I start in a directory and create a git repo. Then, whenever I change some file, I commit it to the repo. As much as possible, I try to name data outputs in a way that lets me drop them into my .gitignore file.
Then, as much as possible, I work in a single terminal session per project, and when I hit a pause point (like when I've got a set of jobs sent up to the grid), I run 'history | cut -c 8-' and paste the output into a lab-notes file. I then edit the file to add comments on what I did and, importantly, change the git add/commit lines to git checkout (I have a script that does this based on the commit messages). As long as I start in the right directory, and my external data doesn't go away, this means I can recreate the entire process later.
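A rough sketch of what such a notes-rewriting script could look like (this is not the answerer's actual script; it assumes commits were made with git commit -m '...' and that commit messages are unique enough to look up):

```python
# Sketch: rewrite 'git add'/'git commit' lines in a lab-notes file into
# 'git checkout <sha>' lines, looking each commit up by its message, so the
# recorded session can be replayed against the matching source state later.
import re
import subprocess
import sys

def sha_for_message(message: str) -> str:
    """Find the commit whose subject matches; assumes messages are unique enough."""
    log = subprocess.run(
        ["git", "log", "--format=%H %s"], capture_output=True, text=True, check=True
    ).stdout
    for line in log.splitlines():
        sha, _, subject = line.partition(" ")
        if subject == message:
            return sha
    raise LookupError(f"no commit with message {message!r}")

def rewrite_notes(notes_path: str) -> None:
    with open(notes_path) as handle:
        lines = handle.readlines()
    rewritten = []
    for line in lines:
        if line.strip().startswith("git add"):
            continue  # the add is implied by the commit that follows
        match = re.match(r"""\s*git commit .*-m ['"](.+?)['"]""", line)
        if match:
            rewritten.append(f"git checkout {sha_for_message(match.group(1))}\n")
        else:
            rewritten.append(line)
    with open(notes_path, "w") as handle:
        handle.writelines(rewritten)

if __name__ == "__main__":
    rewrite_notes(sys.argv[1])  # e.g. python rewrite_notes.py lab_notes.txt
```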
For any even slightly complex processing tasks, I write a script to do it, so that my notebook, as much as possible, looks clean. To an approximation, a helper script can be viewed as a subroutine in a larger project, and should be documented internally to at least that level.
Your question is about project management. Bad project management is not unique to bioinformatics, and I find it hard to believe that the entire bioinformatics industry is committed to bad software design.
About the pressure... Again, there are others in this world who have very challenging deadlines, and they still use good software designs.
In many cases, following a good software design does not hold a project back and may even speed up its design and maintenance (at least in the long run).
Now to your real question... You can offer your manager to redesign small parts of the code that have no influence on the rest of the code as a proof of concept (POC), but it's really hard to stop a moving truck, so don't get upset if he feels "we've worked this way for years - we know what we are doing, and we don't need a child to teach us how to do our work". Learn to work like the rest, and once you have gained their trust, you can do your thing once in a while (I hope you will have the time and the devotion to do the right thing).
Good luck.
In my copious free time, I collaborate with a number of scientists (mostly biologists) who develop software, databases, and other tools related to the work they do.
Generally these projects are built on a one-off basis, used in-house, and eventually someone decides "oh, this could be useful to other people," so they release a binary or slap a PHP interface onto it and shove it onto the web. However, they typically can't be bothered to make their source code or dumps of their databases available for other developers, so in practice, these projects usually die when the project for which the code was written comes to an end or loses funding. A few months (or years) later, some other lab has a need for the same kind of tool, they have to repeat the work that the first lab did, that project eventually dies, lather, rinse, repeat.
Does anyone have any suggestions for how to persuade people whose primary job isn't programming that it's of benefit to their community for them to be more open with the tools they've built?
Similarly, any advice on how to communicate the idea that version control, bug tracking, refactoring, automated tests, continuous integration and other common practices we professional developers take for granted are good ideas worth spending time on?
Unfortunately, a lot of scientists seem to hold the opinion that programming is a dull, make-work necessary evil and that their research is much more important, not realising that these days, software development is part of scientific research, and if the community as a whole were to raise the bar for development standards, everyone would benefit.
Have you ever been in a situation like this? What worked for you?
Software Carpentry sounds like a match for your request:
Overview
Many scientists and engineers spend much of their lives programming, but only a handful have ever been taught how to do this well. As a result, they spend their time wrestling with software, instead of doing research, but have no idea how reliable or efficient their programs are.
This course is an intensive introduction to basic software development practices for scientists and engineers that can reduce the time they spend programming by 20-25%. All of the material is open source: it may be used freely by anyone for educational or commercial purposes, and research groups in academia and industry are actively encouraged to adapt it to their needs.
Let me preface this by saying that I'm a bioinformatician, so I see the things you're talking about all the time. There's some truth to the fact that many of these people are biologists-turned-coders who just don't have the exposure to best practices.
That said, the core problem isn't that these people don't know about good practices, or don't care. The problem is that there is no incentive for them to spend more time learning software engineering, or to clean up their code and release it.
In an academic research setting, your reputation (and thus your future job prospects) depends almost entirely on the number and quality of publications that you've contributed to. Publications on methods or new algorithms are not given as much respect as those that report new biological findings. So after I do a quick analysis of a dataset, there's very little incentive for me to spend lots of time cleaning up my code and releasing it, when I could be moving on to the next dataset and making more biological discoveries.
I'll also note that the availability of funding for computational development is orders of magnitude less than that available for doing the biology. In a climate where only 10% of submitted grants are getting funded, scientists don't have the luxury of taking time to clean and release their code, when doing so doesn't help them keep their lab funded.
So, there's the problem in a nutshell. As a bioinformatician, I think it's perverse and often frustrating.
That said, there is hope for the future. With second-and-third generation sequencing, in particular, biology is moving into the realm of high-throughput discovery, where data mining and solid computational pipelines become integral to the success of the science. As that happens, you'll see more and more funding for computational projects, and more and more real software engineering happening.
It's not exactly simple, but demonstration by example would probably drive the point home most effectively - find a task the researcher needs done, find someone who did take the time to make a tool w/source available, and point out how much time the researcher could save as a result due to having that tool available - then point out that they could give back to the community in the same fashion.
In effect, what you are asking them to do is become professional developers (with their copious free time), in addition to their chosen profession. Their reluctance is understandable.
Does anyone have any suggestions for how to persuade people whose primary job isn't programming that it's of benefit to their community for them to be more open with the tools they've built?
Give up. Seriously, this is like teaching a pig to sing. (I can say this because I used to be a physicist so I know what they're like.)
The real issue is that your colleagues are rewarded for scientific output measured in publications, not software. It's hard enough in computer science to get recognized for building software; in the other sciences, it's nearly impossible.
You can't sell good development practice to your biology friends on the grounds that "it's good for you." They're going to ask "should I invest effort in learning about good software practice, or should I invest the same effort to publish another biology paper?" No contest.
Maybe framing it in terms of academic/intellectual responsibility would help, to a degree - sharing your source is, in many ways, like properly citing your sources or detailing your research methodology. There are similar arguments to be made for some of the "professional software developer" behaviors you'd like to encourage, though I think releasing the code is probably an easier sell on these grounds than other things which could require significantly more work.
Actually, asking any busy project team to include in their schedule time for making their software suitable for adoption by another team is extremely hard in my experience.
Doing extra work for the public good is a big ask.
I've seen a common pattern of "harvesting" after the project is complete, reflecting that immediate coding for reuse tends to get lost in the urgency of the day.
The only avenue I can think of is if the reuse is within an organisation with a budget for a "hunter gatherer", someone whose reason for being there is IT.
You may be on more of a win for things such as unit tests because they have immediate payback for the development.
For one thing, could we please stop teaching biologists Perl? Teaching non-professional programmers a write-only language is practically guaranteed to lead to unmaintainable, throw-away code. Python fills the same niche, is just as easy to learn (it's even used to teach kids programming!), and is much more readable.
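To make the readability point concrete, here is the kind of small, throwaway analysis script meant here, in plain Python (the file name and the task are invented for the example):

```python
# Example only: per-record GC content from a FASTA file, written for readability
# rather than as a one-liner. The input file name is made up.
def read_fasta(path):
    """Yield (header, sequence) pairs from a FASTA file."""
    header, chunks = None, []
    with open(path) as handle:
        for line in handle:
            line = line.strip()
            if line.startswith(">"):
                if header is not None:
                    yield header, "".join(chunks)
                header, chunks = line[1:], []
            elif line:
                chunks.append(line.upper())
        if header is not None:
            yield header, "".join(chunks)

for name, seq in read_fasta("reads.fasta"):
    if not seq:
        continue  # skip empty records rather than dividing by zero
    gc = (seq.count("G") + seq.count("C")) / len(seq)
    print(f"{name}\t{gc:.3f}")
```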
Draw parallels with statistics. Stats is a crucial part of scientific research, and one where the only sensible advice is: either learn to do it properly, or get an expert to do it for you. Incorrectly-done stats can completely undermine a paper, just as badly-written code can completely undermine a public database or web resource.
PS: This blog is very good, but getting them to read it will be an uphill struggle: Programming for Scientists
Chris,
I agree with you to a degree, but in my experience what ends up happening is that in their eagerness to publish you end up with too many "me too" codes and methods, which don't really add to the quality of science. If there was a little more thought about open sourcing code and encouraging others to contribute (without necessarily getting publications out of it) then everyone would benefit.
Definitely agree that a separation between the scientific programmers and the software engineers is a good thing, especially for production applications. But even for scientific programming, the quality of my code would have been so much better if I had followed good practices at the time.
In my experience the best way of getting people to program cleanly is to show a good example when you're working with them.
eg: "I never spend hopeless days debugging my code because the first things I code are automated unit tests that will pinpoint problems when they are small and easily detectable"
or: "I'm very bad at keeping track of versions of things, but sometimes my new code does break what did work before. So I use svn/git/dropbox to keep track of things for me"
In my experience that kind of statement can raise the interest of "biologists that learned how to script".
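To make the first of those statements concrete, here is a minimal example of the kind of test meant there (the function and the tests are invented for illustration; runnable with pytest or directly):

```python
# Tiny illustration of "automated unit tests pinpoint problems while they are small".
def reverse_complement(seq: str) -> str:
    """Return the reverse complement of a DNA sequence."""
    pairs = {"A": "T", "T": "A", "G": "C", "C": "G", "N": "N"}
    return "".join(pairs[base] for base in reversed(seq.upper()))

def test_reverse_complement_simple():
    assert reverse_complement("ATGC") == "GCAT"

def test_reverse_complement_handles_lowercase_and_n():
    assert reverse_complement("atgn") == "NCAT"

if __name__ == "__main__":
    test_reverse_complement_simple()
    test_reverse_complement_handles_lowercase_and_n()
    print("all tests passed")
```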
And if you need to collaborate on a bigger project, make it clear that you have more experience and that everything will go more smoothly if things are done your way.
Regarding publication of code, current practice is indeed frustrating. I would like to see a new journal like Source Code for Biology and Medicine, where code is peer-reviewed and can be published, but that has no (or very low) publication costs. Putting code on SourceForge or elsewhere is indeed not "scientifically worth it" because it doesn't add a line to your publication list, and most code is not revolutionary enough to warrant paying $1,000 for publication in Source Code for Biology and Medicine or PLoS ONE...
You could have them use a content management system, like Joomla. That way they only push content and not code.
I wouldn't so much persuade as I would streamline the process. Document it clearly, make video tutorials and bundle some kind of tool chain that makes it ridiculously easy to get source repositories set up without requiring them to become experts in something that isn't their main field.
Take a really good programmer who already knows best practices, ask your scientists to teach him what they need and what they do, and eventually the programmer will have the minimum domain knowledge (I suspect it takes between 1 and 3 years, depending on the domain) to do what the scientists ask for.
Developers always have to learn another domain of competency, because most of their programs are not for developers, so they need to know what the "client" does.
To be devil's advocate, is teaching scientists to be good software engineers the right thing to do? Software in research is usually very purpose specific - sometimes to the point where a piece of code needs to run successfully only once on a single data set. The results then feed into a publication and the goal is met. And there's a high risk that your technique or algorithm will be superseded by a better one in short order. So, there's a real risk that effort spent producing sparkling code will be wasted.
When you're frustrated by wading through a swamp of ill-formed Perl code, just remember that the code you're looking at is one of the rare survivors. Mountains of such code have been written, used a few times, then discarded, never to see the light of day again.
I guess I'm just saying there's a big place in research for smelly heinous one-off prototype code. There are good reasons why such code exists. It may not be pretty, but if it gets the job done, who cares? We can always hire a software engineer to write the production-ready version later, IF it turns out to be justified, and let our scientists move on.
I'm a new software architect/lead, coming up with the software design for a team of software developers. I'm producing the requirements spec, interface header files, Visio software design docs, build plan, etc.
My question is: what does the rest of the team do during this period? I'm certainly engaging them in the design, but we don't need the whole team actively working on what I'm doing all the time.
Also, are there any good books for a new software architect?
Generally the various stages overlap, so there will be some coding during design etc. There are a lot of things to do besides that. They can be reviewing unfamiliar technology that is going to be used, setting up source control system, reviewing business requirements, reviewing your documents to make sure they make sense and are clear. There is a lot of other work to be done besides programming.
What a software team does while the lead does the design varies a lot from company to company. At my company we try to work on the design while the developers are finalizing other projects or fixing bugs.
Another approach that I've taken when starting a whole new project is to get the developers to work on the design as well - people with a good understanding of the requirements can help you designing smaller parts of the system and writing the specs for them. Others can work on mockups, frameworks. This worked rather well for the small software team I led in a previous job (4 developers in total).
I also found it useful to have other team members research parts I'm unsure of (or even validating that things I think should work will indeed work), such as:
Investigating whether an external API provides the features we need
Writing a small proof of concept or technology demonstrator
Creating an API mockup (header file, interface, or REST endpoint) to investigate whether the API looks useful (see the sketch below).
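A minimal sketch of such a mockup (all names are hypothetical; the point is that callers can be prototyped against the interface before anyone commits to the real API):

```python
# Hypothetical API mockup: an interface plus an in-memory fake, so the team can
# exercise call patterns and judge whether the API shape is actually useful.
from abc import ABC, abstractmethod
from dataclasses import dataclass

@dataclass
class SensorReading:            # invented data type for the mockup
    sensor_id: str
    value: float

class TelemetryService(ABC):    # the interface under evaluation, not a real library
    @abstractmethod
    def fetch_latest(self, sensor_id: str) -> SensorReading: ...

class FakeTelemetryService(TelemetryService):
    """Canned-answer stub so callers can be prototyped before the real endpoint exists."""
    def fetch_latest(self, sensor_id: str) -> SensorReading:
        return SensorReading(sensor_id=sensor_id, value=42.0)

# Prototype code is written against TelemetryService and swapped for the real thing later.
print(FakeTelemetryService().fetch_latest("pump-7"))
```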
As others have said, you typically want a ramp-up period during the first part of the project and through the first iteration. You're planning on building this iteratively, aren't you? Start with a core team (no more than 3-4 people, since you're going to need to communicate heavily with each other) to help you explore the requirements, get a basic data model in place, identify and set up any frameworks, and identify and set up build and test tools. Some coding activities typically take place in the design phase: UI mockups, and run-ahead prototypes of technically sensitive areas (whatever risks you have should be mitigated by exploratory coding, be they new technologies, undocumented interfaces to integrated systems, or unstable requirements).
But coders in the design phase should help with the design, in order to get their buy-in and to help train up the rest of the team during the first iterations. Your role during this is to ensure that the major nonfunctional requirements are known, prioritized, met by the design, and testable. You should also collaborate with the project lead or whoever else is responsible for staffing and financing in order to sketch out the iterations and the staffing levels needed. Ensure the solution can be built iteratively, and aim at implementing only a basic structure during the first iteration, both to build confidence and to eliminate risks. (Sometimes you can push major risks to the second iteration, and focus the first on confidence and team building.)
And of course, be sure you are not designing every detail. You should be able to use every design artifact in the next iteration (and elaborate them later as needed). Since design decisions are expensive to change, try to postpone them. However, some influence the entire solution (for instance, the data model, or your approach to security) and absolutely must be at least outlined up front. This isn't waterfall. This is just not closing your eyes and hoping a viable architecture will emerge by magic.
But design proceeds throughout the iterations. It's just that you do less of it as you go along, and with lesser impact on the solution (unless you're unlucky... and then things get expensive).
Stop doing the useless things you do and just start coding with them! ;)
If there is no overlap with another ongoing project, getting them involved as you're doing is great, maybe push it a little further by having them prototype and present the plus and minus of alternative technologies (APIs, frameworks, libraries, etc...) that your project could use.
As a new software architect, I can recommend some books that helped me understand the role of the architect (but of course not to master it):
Fundamentals of Software Architecture: An Engineering Approach:
This book gives a good, modern overview of software architecture and its many aspects; a good place to start if you are a beginner or want to broaden your knowledge.
Software Architecture in Practice:
Explains what software architecture is, why it's important, and how to design, instantiate, analyze, evolve, and manage it in disciplined and effective ways.
Software Architect's Handbook:
This book takes you through all the important concepts, right from design principles to different considerations at various stages of your career in software architecture. It begins by covering the fundamentals, benefits, and purpose of software architecture.
Clean Architecture: A Craftsman's Guide to Software Structure and Design:
Learn what software architects need to achieve and how to achieve it, master essential software design principles and see how designs and architectures go wrong.
Software Architecture: The Hard Parts:
An advanced architecture book, with this book, you'll learn how to think critically about the trade-offs involved with distributed architectures.
Usually there's another project they can work on, but...
I have my team review the project specs/requirements and put together a basic/preliminary structure to get them already thinking through the application and working out specific questions.
When we convene at the table to discuss the plan they already have an idea of what the project is and requires and in some cases, they present questions I may have missed or overlooked.
Although it's too late now, a good way to approach it is to move the architect over before his current project has ended. Start freeing him up at like 25% then work your way up to 75-100% on the new project a month or two before it starts (maybe more depending on how much analysis and customer interaction there is).
On a trivial project (let's say 2 man-years) it might not be necessary, but anything bigger than that can end up in chaos if somebody doesn't at least get the analysis right before everybody jumps aboard.
If your team does not have any other projects to work on, ask experienced programmers on your team to come up with a prototype so that you can create a requirements doc according to the needs of the client.
Also, programmers who are new to the technologies used by the team could use this time to familiarize themselves with the technologies on which your team is going to develop the project.
architect != designer
Chances are that all of your developers can help with the design; let them. Architects don't have to be "lone wolves" and do everything themselves. You lay out the guidelines and the principles and the scaffolding, rough in the wiring, and let your developers flesh out the details - whether it is drawing Visio diagrams or building prototypes to mitigate unknowns/risks.
Migrate towards Agile/XP and away from waterfall methods, and you'll find the team a lot more help.
When making the general design, it's very handy to have programmers create proof-of-concepts. Do that especially with parts of the system that could end up being show stoppers if they don't work in the way you plan to do them, so you can think of alternatives, and adjust the design.
That's going to help you to make the right design-decisions before moving entirely into a certain direction.
Just doing a design, and then moving on and start coding is a sure way to mess up a project. You won't realize that your design is not feasible (or just plain sucks) until you're half-way coding, and by then it's too late to make radical changes.
You'll waste time mitigating non-existing problems during the design, and you'll run into unforeseen problems during implementation.
I've never written functional specs; I prefer to jump into the code and design things as I go. So far it's worked fine, but for a recent personal project I'm writing out some specs which describe all the features of the product and how it should 'work' without going into details of how it will be implemented, and I'm finding it very valuable.
What are your thoughts? Do you write specs, or do you just start coding and plan as you go, and which practice is better?
If you're driving from your home to the nearest grocery store, you probably don't need a map. But...
If you're driving to a place you've never been before in another state, you probably do.
If you're driving around at random for the fun of driving, you probably don't need a map. But...
If you're trying to get somewhere in the most effective fashion (minimize distance, minimize time, make three specific stops along the way, etc.) you probably do.
If you're driving by yourself and can take as long as you like, stopping any time you see something interesting or to reconsider your destination or route, you may not need a map. But...
If you're driving as part of a convoy, and all need to make food and overnight lodging stops together, and need to arrive together, you probably do.
If you think I'm not talking about programming, you probably don't need a functional spec, story cards, narrative, CRCs, etc. But...
If you think I am, you might want to consider at least one of the above.
;-)
For someone who "jumps into the code" and "design[s] as they go", I would say writing anything including a functional spec is better than your current methods. A great deal of time and effort can be saved if you take the time to think it through and design it before you even start.
Requirements help define what you need to make.
Design helps define what you are planning on making.
User Documentation defines what you did make.
You'll find that most places will have some variation of these three documents. The functional spec can be lumped into the design document.
I'd recommend reading Rapid Development if you're not convinced. You truly can get work done faster if you take more time to plan and design.
Jumping "straight to code" for large software projects would almost surely lead to failure (as immediatley starting posing bricks to build a bridge would).
The guys at 37 Signals would say that is better to write a short document on paper than writing a complex spec. I'd say that this could be true for mocking up quickly new websites (where the design and the idea could lead better than a rigid schema), but not always acceptable in other real life situations.
Just think of the (legal, even) importance a spec document signed by your customer can have.
The moral probably is: be flexible, and plan with functional or technical specs as much as you need, according to your project's scenario.
For one-off hacks and small utilities, don't bother.
But if you're writing a serious, large application that has demanding customers and has to run for a long time, it's a MUST. Read Joel's great articles on the subject - they're a good start.
I do it both ways, but I've learned something from Test Driven Development...
If you go into coding with a roadmap, you will get to the end of the trip a helluva lot faster than you will if you just start walking down the road without any idea of how it is going to fork in the middle.
You don't have to write down every detail of what every function is going to do, but define your basics so that you know what you need to get done to make everything work well together.
All that being said, I needed to write a series of exception handlers yesterday and I just dove right in without trying to architect it out at all. Maybe I should reread my own advice ;)
What a lot of people don't want to admit or realize is that software development is an engineering discipline, and a lot can be learned from how engineers approach things. Mapping out what you're going to do in an application isn't necessarily vital on small projects, as it is normally easier to quickly go back and fix your mistakes; you don't see how much time is wasted compared to writing down what the system is going to do first.
In reality, on large projects it's almost necessary to have a road map of how the system works and what it does. Call it a Functional Spec if you will, but normally you have to have something that can show you why step B follows step A. We all think we can make it up on the fly (I am definitely guilty of this too), but in reality it causes us problems. Think back and ask yourself how many times you encountered something and said "Man, I wish I had thought of that earlier," or someone else saw what you'd done and showed you that you could have taken 3 steps to accomplish a task where you took 10.
Putting it down on paper really forces you to think about what you're going to do. Once it's on paper it's not a nebulous thought anymore, and you can look at it and evaluate whether what you were thinking really makes sense. Changing a one-page document is easier than changing 5,000 lines of code.
If you are working in an XP (or similar) environment, you'll use stories to guide development, along with lots of unit and hallway usability testing (I've drunk the Kool-Aid, I guess).
However, there is one area where a spec is absolutely required: when coordinating with an external team. I had a project with a large insurance company where we needed to have an agreement on certain program behaviors, some aspects of database design and a number of file layouts. Without the spec, I was wide open to a creative interpretation of what we had promised. These were good people - I trusted them and liked working with them. But still, without that spec it would have been a death march. With the spec, I could always point out where they had deviated from the agreed-to layout or where they were asking for additional custom work ($$!). If working with a semi-antagonistic relationship, the spec can save you from even worse: a lawsuit.
Oh yes, and I agree with Kieveli: "jumping right to code" is almost never a good idea.
I would say it totally "depends" on the type of problem. I tend to ask myself whether I am writing it for the sake of it or for the layers above me. I have also debated this, and my personal experience says you should, since it keeps the project on track with expectations (rather than going off course).
I like to loosely decompose any non-trivial problem on paper first, rather than jumping into code, for a number of reasons:
The stuff I write on paper doesn't have to compile or make any sense to a computer
I can work at arbitrary levels of abstraction on paper
I can add pictures and diagrams really easily
I can think through and debug a concept very quickly
If the problem I'm dealing with is likely to involve either a significant amount of time, or a number of other people, I'll write it up as an outline functional spec. If I'm being paid by someone else to develop the software, and there is any potential for ambiguity, I will add enough extra detail to remove this ambiguity. I also like to use this documentation as a starting point for developing automated test cases, once the software has been written.
Put another way, I write enough of a functional specification to properly understand the software I am writing myself, and to resolve any possible ambiguities for anyone else involved.
I rarely feel the need for a functional spec. OTOH I always have the user responsible for the feature a phone call away, so I can always query them for functional requirements as I go.
To me a functional spec is more of a political tool than technical. I guess once you have a spec you can always blame the spec if you later discover problems with the implementation. But who to blame is really of no interest to me, the problem will still be there even if you find a scapegoat, better then to revisit the implementation and try to do it right.
It's virtually impossible to write a good spec, because you really don't know enough of either the problem or the tools or future changes in the environment to do it right.
Thus I think it's much more important to adapt an agile approach to development and dedicate enough resources and time to revisit and refactor as you go.
It's important not to write them: There's Nothing Functional about a Functional Spec
We don't teach children calculus first. We first teach them arithmetic, then algebra, then geometry, then analytical geometry, and finally calculus.
Why, then, do we teach our computer scientists frameworks and IDEs first? Some curricula do force students to learn computer science fundamentals, but the vast majority of graduates that I see could not compose a framework of their own to save their lives.
Where then is the next generation of tool builders?
How can we promote the understanding necessary to create frameworks and development environments?
This is of course a generality. Not all education is lacking, but it seems to be the majority and it brings down the quality of our profession as a whole.
I think the analogy is a bit off. A better analogy would be "We don't teach our kids to use calculators to add and subtract, why teach programmers to use an IDE to program?"
Get rid of HR departments that require X years experience in Y. The universities are just tailoring their course to the HR department's requirements.
I employ graduates who can code in something (I really don't care what language) and who can learn.
I see your point, although I think the math analogy doesn't quite fit. You have to know basic arithmetic to be able to get anything done in any other math discipline.
When I began programming frameworks were mostly unheard of. If you wanted a binary tree, by God, you went and wrote one. In C or Assembler. That was basically it, so to get anything done at all you had to know a lot.
Today, Frameworks and IDEs and designers make it possible for "noobs" to create actually pretty brilliant things without knowing the first thing about how to build a framework, or a compiler, or manage memory allocation.
The real issue is, what about all the dingbats that think they are awesome, great programmers because they used Frontpage or Access? Managers have a hard time telling the difference between that kind of programmer and one that really knows software development as a discipline.
So, specifically, why is it that way? Because everyone wants a job and nobody hires programmers that know how to build a binary tree. They want programmers that know .Net or J2EE, etc.
I would argue that there is probably enough work out there for 9-to-5 programmers who can start at the framework level and go up from there. The truly good ones - mostly your program-as-a-career and/or program-as-a-hobby types - are going to pick up the knowledge they may have missed in college over time anyway. You can't force everyone to be a wonderful programmer no matter what curriculum you teach. Inquisitive students are going to learn the fundamentals whether they're taught in class or entirely on their own.
There are tool makers and tool breakers. And of course there are tools, but let's not go there.
If you have a good look at an automotive workshop, you will see a lot of funny little tools that you don't see on the shelves in hardware stores. Like the ones for pushing back brake caliper pistons. Or the clamps for compressing valve stems so you can get the collets out with one hand while talking to your mates about nailing the new secretary (instead of watching them fly across the room when the spring slips out from your screwdriver).
These were designed by mechanics. They're really effective, generally small and cheap, and totally incomprehensible until you've seen them in action.
Most of the profound changes in automotive technology were bottom-up, but top-down is also needed. Individual mechanics can't make fundamental technology changes like the switch from cast iron to alloy heads. A new broom sweeps clean, an old broom knows the corners. You need both.
But I digress: the point is that the mechanics couldn't design these tools if they lacked fundamental skills and knowledge. My father built me an entire motorcycle from scrap iron when I was a kid. As an adult, because I lack his skills and knowledge and modes of thought, I can barely maintain the bike I bought from Honda, much less take to it with an oxy like Mr T in a creative frenzy.
With code, I am as my father was with steel. Donald Knuth is my constant companion, and when the wireless protocol for our GPS loggers needs to be implemented in .NET it's me they come to see. The widget monkeys wouldn't know where to start.
I think the problem is in fact the GUI paradigm in general.
Microsoft made using computers much easier, they popularized the Graphical User Interface. They brought this interface metaphor, (the desktop, the file) to the domain of programming as well and very effectively too with their Visual Basic tool.
But just as the GUI obscures what happens "under the hood" so does the IDE obscure the manipulation of bits and bytes. The question is, of course, risk to reward ratio - how much understanding do programmers lose in exchange for productivity?
A cursory look at "The Art of Computer Programming" might show why IDEs are useful; "The ultimate packing density is achieved when we have 1-bit items, because we can cram 64 of them into a single 64-bit word. Suppose, for example, that we want a table of all odd prime numbers less than 1024, so that we can easily decide the primality of a small integer. No problem; only eight 64-bit numbers are required:
p0 = 011101101101001100101101001001001100101100101001000101101101000000
p1 = . . ."
Programming is really hard, you can see how an IDE might help. :^)
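For a sense of what that packing looks like in code, here is a small sketch (mine, not Knuth's) that builds the eight 64-bit words for the odd numbers below 1024 and uses them as a primality lookup:

```python
# A minimal sketch of the bit-packed prime table described above: one bit per odd
# number below 1024, so 512 bits fit in eight 64-bit words.

def sieve_odd_primes(limit=1024):
    """Return eight 64-bit integers; bit i of the table marks 2*i + 1 as prime."""
    is_prime = [True] * limit
    is_prime[0] = is_prime[1] = False
    for n in range(2, int(limit ** 0.5) + 1):
        if is_prime[n]:
            for m in range(n * n, limit, n):
                is_prime[m] = False
    words = [0] * (limit // 128)      # 512 odd numbers / 64 bits per word = 8 words
    for i in range(limit // 2):       # bit i <-> odd number 2*i + 1
        if is_prime[2 * i + 1]:
            words[i // 64] |= 1 << (i % 64)
    return words

def is_small_odd_prime(n, table):
    """Look up the primality of an odd n < 1024 in the packed table."""
    i = n // 2
    return bool(table[i // 64] >> (i % 64) & 1)

table = sieve_odd_primes()
assert is_small_odd_prime(97, table) and not is_small_odd_prime(91, table)  # 91 = 7 * 13
```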
Learning the abstraction is easier than learning the details when it comes to programming. It's harder to teach someone to hand-code assembler to print "Hello World" than it is to have them throw together a form with a button on it that shows a "Hello World" message when the button is clicked.
You didn't know how to build the engine of a car before learning to drive, did you? Because it's not necessary in order to drive. In the same vein, you don't need to learn how a linked list or binary tree works in order to maintain a list of names and search them.
There will always be those who want to get under the hood and learn the "why" of things, but I don't think it's required to get things done.
I always screen applications by asking difficult questions that they could only answer if they understood how something really works. I think it is a real shame colleges and universities are teaching people framework based development but not focusing on core software principles. I agree that what matters more than anything is someone who understands how programming works and has the drive to learn anything they can about it.
Most universities I know of have an introduction to computer programming course that teaches basic programming concepts. Unfortunately it is impossible to teach programming without actually writing code.
The problem is that some prefer to teach this course using an OO language such as Java or C#, and so the students must use Visual Studio (or the Java equivalent).
It is very hard to explain the basic concepts when the IDE forces you to work in a certain way.
I think that the first language students learn should be a procedural language such as C. This way there are fewer layers of abstraction between them and the basic CS concepts.
Agree with cfeduke.
I looked at the work for the same CS courses I did from 2 years previously, and it was way harder. 5 years previously, way, way harder.
The CS bar is being lowered more and more, presumably because there are more and more jobs that don't require any working knowledge of any of the complicated CS subjects. There are huge numbers of jobs for people to just cut code.
Traditionally, people who wanted to be programmers took CS courses, and as coding has gotten easier this is still the case.
What really needs to happen is for CS not to be a requirement for professional software development. Instead, there needs to be another curriculum that focuses more on getting people out the door and cutting code.
That would leave CS as the course for your next generation of tool builders.