How to do unit testing on STM32 programming [closed]

I have an STM32 Nucleo board and have written a program for some embedded functionality. Now I want to unit test that functionality, including peripherals like ADC, SPI, and UART. Can anyone suggest a unit-testing framework for this?

There are two common strategies that I'm aware of:
Make sure you have a good SW component architecture, including a well-defined interface towards HAL/driver components that only take care of access to HW port registers.
Devise a separate verification method for low-level HW-access components, which avoids unit-testing tooling.
To establish such a verification concept, make sure
to avoid any processing logic in HAL/driver components as far as possible.
to avoid any HW dependencies and any non-portable code in the remaining parts of your SW.
Now you can take the HW-free parts of your code to some PC architecture (x86 or whatever) and do all your unit testing with conventional UT tools.
Of course, your UT will miss any errors that are related to the differences between STM32 and your PC.
Note that usually this is not too big a problem because HW-specific features have been "pushed" into the HAL, and in the other parts of the code, you are only looking for logical errors that have nothing to do with the HW/architecture.
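As a minimal sketch of what such a seam can look like in C (all names here are hypothetical, not taken from ST's HAL), the application logic depends only on a small interface struct, so the PC build can substitute a fake:

    /* Hypothetical HW-access seam; illustrative names only. */
    #include <stdint.h>
    #include <stddef.h>
    #include <string.h>

    typedef struct {
        int (*transmit)(const uint8_t *buf, size_t len);  /* 0 on success */
        int (*receive)(uint8_t *buf, size_t len);
    } spi_port_t;

    /* Portable application logic: no register access, only the interface above. */
    int sensor_read_temperature(const spi_port_t *spi, int16_t *centi_deg)
    {
        uint8_t cmd = 0x4A;            /* hypothetical "read temperature" opcode */
        uint8_t raw[2];
        if (spi->transmit(&cmd, 1) != 0 || spi->receive(raw, 2) != 0)
            return -1;
        *centi_deg = (int16_t)(((uint16_t)raw[0] << 8) | raw[1]);
        return 0;
    }

    /* Host-side fake for unit tests: replays canned bytes instead of touching HW. */
    static const uint8_t fake_bytes[2] = { 0x09, 0xC4 };  /* 2500 -> 25.00 deg C */
    static int fake_tx(const uint8_t *b, size_t n) { (void)b; (void)n; return 0; }
    static int fake_rx(uint8_t *b, size_t n) { memcpy(b, fake_bytes, n); return 0; }
    static const spi_port_t fake_spi = { fake_tx, fake_rx };

On the target, the same struct is filled with functions that talk to the real SPI registers; the application code never changes.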
I know that this part of the answer talks round your subject instead of providing the solution you were asking for.
Nevertheless, this is what many projects do.
Develop an STM32 integration for your unit-testing framework yourself, or buy a commercial unit-testing tool that supports your HW architecture.
STM32 is very popular nowadays, so many commercial tools (Tessy, VectorCAST, etc.) provide integrations that may fit your system.
Note that you also have to select your build toolchain (and possibly the debug adapter) in line with this tool selection, so there are some additional constraints to check.
To get some orientation for your decision between those two strategies, consider what purpose the unit testing is meant to serve:
Which kinds of errors are you going to find this way? Which other verification methods are you already applying? Which robustness does your application have to achieve/guarantee - how critical is your application?
Usually, a certain kind/level of criticality corresponds to a certain amount of effort to keep the probability of residual errors in your software/system as low as reasonably practicable (ALARP). You should make sure to "spend" that effort in a balanced way, so that every added verification finds as many errors as possible. Unit-testing of low-level code is more tedious and often finds fewer mistakes (per test case or per working hour) than other sorts of testing.
Do you have to prove a certain level of UT coverage in order to fulfill process criteria like safety standards (e.g., IEC 61508-3 and derived standards), internal company standards, etc.?
Then you have to check what those standards demand for your application, and discuss your strategy with the corresponding assessors.
If such requirements apply to your situation, you will have to convince the assessors that the verification measures applied are sufficient. For professional efficiency, it is crucial that you make up your mind (and start the discussion) which strategy to follow at the beginning of the project.
Edit: I just came across a good discussion of this topic at SoftwareEngineeringStackExchange.
Have a look there.

I suggest Unity; the test framework is just a few source and header files to add to the project. You can augment it with CMock and Ceedling to build your unit tests on a host machine. You can also use PlatformIO to gather integration-test results as shown here. I have not yet figured out whether CMock works on an STM32 board, but I may try in the near future.
I have unit tested STM32 code with CMock and Ceedling (on Linux) and then run some integration tests on target.
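For a flavor of what a host-side Unity test looks like, here is a minimal sketch; the crc8 module under test and the expected values are invented for illustration:

    #include <stdint.h>
    #include <stddef.h>
    #include "unity.h"
    #include "crc8.h"   /* hypothetical module under test */

    void setUp(void) {}      /* Unity runs these before/after every test */
    void tearDown(void) {}

    void test_crc8_of_empty_buffer_is_zero(void)
    {
        TEST_ASSERT_EQUAL_HEX8(0x00, crc8(NULL, 0));
    }

    void test_crc8_matches_known_vector(void)
    {
        const uint8_t msg[] = { 0x01, 0x02, 0x03 };
        TEST_ASSERT_EQUAL_HEX8(0x48, crc8(msg, sizeof msg)); /* expected value illustrative */
    }

    int main(void)
    {
        UNITY_BEGIN();
        RUN_TEST(test_crc8_of_empty_buffer_is_zero);
        RUN_TEST(test_crc8_matches_known_vector);
        return UNITY_END();
    }

Compiled and run on the host, this is exactly what Ceedling automates, with CMock generating mocks for any HAL headers the module includes.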

Related

Siemens PLC programming best practices [closed]

My question is pretty simple. Is there any useful place for learning to work with Siemens PLCs?
Full Disclosure:
I was a Software Engineer for Rockwell Automation working with their A|B PLCs
You probably won't like my answer
To put it plainly, programming PLCs, whether you're using Ladder Logic, Structured Text, Instruction List, Sequential Function Chart, FBD, or Continuous Function Chart, isn't the same as programming software in a language like C++, Java, JavaScript, etc.
Simply put, there is no one set of "best practices" that fits every use case. The reason is that, unlike standard software development, where you can apply principles like SOLID to make your code easier to read, maintain, and extend, PLC programs are tied to a very real physical process and physical machinery. Often what you find in the industry is that every plant/manufacturer/facility establishes its own set of best practices given its needs and process.
To give an example:
Scenario 1:
The logic used to run the distillation process for a small local brewery may include sub-routines or even a loop. They may allow 5 or fewer warnings in their code, and allow a few unused tags. That is totally fine, because they are making beer, the process isn't critical, a bad batch won't kill anyone, and they only have 2 pumps that they're using the logic to iterate over. So if there is a problem that needs troubleshooting, the logic in the sub-routines or loop won't be too much of a headache.
Scenario 2:
I am a global pharmaceutical company producing hundreds of millions of life-critical drugs each year (say insulin). Now my logic has zero sub-routines, no looping, zero tolerance for errors or warnings, and absolutely no unused tags. Why? Because I am in a highly regulated industry, and if there is an issue with one of my products, people may die. And why no sub-routines or looping? Because I am a huge company with hundreds of pumps, mixers, etc. When one of those pieces of equipment goes down, I don't want to dig through some horrible looping logic that is responsible for hundreds of pumps. I want to look at one select piece of the logic that I can quickly understand, correct, and get my line back up and operating.
I am sure you can find some articles or courses out there (like the one you already took) that explains some basic "best practices", but in the real world you will need to adapt your logic to every individual scenario in order to achieve the best outcome. That is my humble two cents on the matter, best of luck to you!
Udemy - there are some courses there, though I haven't tried them myself.
I've watched lots of useful videos on YouTube.
http://www.plcdev.com/siemens_simatic_step_7_programmers_handbook - quite old, but could be useful.
Siemens forums, official manuals, guides. There is lots of info there, quality varies sometimes, but mostly good.
BTW, a nice thing about Siemens is that you can often look up things just by searching the web. That is not the case for some other PLCs...
Good luck!
If you already work in a factory, read the code that runs in the PLCs, and start modifying it if needed. That's how I started; I was initially a lowly automation guy who pulled cables, changed broken sensors, etc.
If you don't, and you need a break into the field, then as an ordinary tech worker the path is usually via electrician or automation engineer. Or, as an entrepreneur/independent contractor, I have seen people just do it: win a contract for some public company's request, do the schematics, write the code, and do the electrical installation all by themselves, or do parts of it with other contractors. You need previous experience to pull that off.
As for some practices:
If you are modifying existing code, always use the existing style, existing functions, and existing blocks.
Do not use programming patterns from the ordinary IT world in low-level PLC code, or use them with caution. The reason is that your code probably has to live for years and years, and has to be debuggable. Patterns usually add layers of complexity, and complexity leads to harder debugging. In the automation world it's usually better to debug logic that is close to the hardware.
If you are starting to make project where you have tens or hundreds of sensors/motors/actuators, start using reusable blocks.
All best practices are learned in the field; sadly there's no other way. I know it's kind of a catch-22 sometimes: you need work to get experience, and experience to get work. I entered the automation world, and later the IT world, the same way: get a job at the low end (maintenance guy or junior developer), gather experience, and in a year or two you will be at mid-level.
And don't lose sight of these constraints while you're programming a PLC:
PLC programming is very low level programming
memory size matters: every byte counts
logic has to be concise and as short as possible: sometimes you have to be good at math!
the machine you're working on is dangerous and can cause damage to product, equipment, or people
the machine you're working on is expensive and is built to produce for years
It's the same as in computer programming: each programmer has their own way of programming; there is no single truth. Sometimes you'll find interesting existing code: don't hesitate to reuse it if it looks smarter and is more efficient.
Find your way and keep in mind the machine you're working on is dangerous for you and the people walking around (it's not always the case but it's important to keep this in mind while programming).
And moreover: don't forget the first rule in industrial automation : if it runs correctly, don't touch it !

Best IDE for PLC ladder programming [closed]

Recently I have been trying to learn Ladder Logic programming for PLCs, but I want to know if there is any IDE for creating Ladder programs better than Step 7 or CX-Programmer. Is there any plugin for Visual Studio or NetBeans that I can use? Finally, is it better to work with PLCs under Linux or Microsoft Windows?
UPDATE 1: After googling about this, I found out that Ladder programming does not depend on the PLC brand or its model, so I did not mention any brand in my question.
What is your goal? In almost all cases, your IDE is dictated by the PLC manufacturer, and your PLC brand is spec'd by the customer when they buy the machine. They spec the PLC because they need something they can go online with for maintenance and troubleshooting. Since the software is proprietary and absurdly expensive, they don't want to buy a new software license for every machine in the plant and have to relearn new software, all while they are bleeding money from manufacturing downtime.
So if your goal is to enter the industry, you want to find out what plants in your country tend to use. In North America it's usually Allen-Bradley, a.k.a. Rockwell Automation, which is programmed with RSLogix 5000 (edit: recent versions of RSLogix 5000 have been rebranded as Studio 5000). In Europe, it's typically Siemens, but I have no experience with them.
PLC IDEs are almost always picked hardware first. With some obscure exceptions, you pick the hardware you want to run, and this determines the IDE. The IDEs are all proprietary and unique to each hardware platform. Rockwell Automation alone has three different IDEs to support their hardware lines, all licensed individually and very expensive.
If Omron is the most common in your area, it's a good idea to start with them. Once you get used to one type of PLC, learning more is really easy.
If you don't mind which PLC platform you're using, I really enjoyed my time with RSLogix. They have a free, training-level suite available here:
http://www.ab.com/linked/programmablecontrol/plc/micrologix/downloads.html
I prefer RSLogix 5000. It's one of the easiest to work with and has User Defined Types and Add On Instructions to help with reusability.
ABB has Control Builder (which is the product that I work on). AFAIK, when somebody buys our AC800 controller they get CB for free, at least the so-called "Compact" version, which is file based. CB supports ladder diagrams as well as all the other IEC 61131-3 languages, plus some extensions like Function Diagrams.
Disclaimer Sorry if it sounded like an ad, just very passionate with what I work on.
I am primarily a high level language programmer, but have also done development on various PLC /PAC platforms, including Rockwell, Siemens, and Beckhoff.
If your goal is merely to get an introduction to ladder, nearly anything will do. You can download Beckhoff's TwinCAT software for free. It is only a 30-day license, but you can just reinstall every 30 days without issue. The great part of TwinCAT is that it runs on a Windows PC, so you can develop and test code directly on the PC and don't need actual Beckhoff hardware to play with. The ladder is a bit quirky, but the statement-list portion is by far one of the more powerful. If you are a C programmer you will feel very comfortable with Beckhoff, because they have duplicated a lot of C-like functions (e.g. memcpy and setcpy) in their libraries.
The Beckhoff platform is not all that widespread, but it would allow you to learn the principles of ladder and PLC/PAC programming.

Neural Network simulator in FPGA? [closed]

To learn FPGA programming, I plan to code up a simple Neural Network in FPGA (since it's massively parallel; it's one of the few things where an FPGA implementation might have a chance of being faster than a CPU implementation).
Though I'm familiar with C programming (10+ years), I'm not so sure about FPGA development. Can you provide a guided list of what I should do/learn/buy?
Thanks!
Necroposting, but for others like me who come across this question: there is an in-depth, though old, treatment of implementing neural networks using FPGAs.
It's been three years since I posted this, but it is still being viewed so I thought I'd add another two papers from last year I recently found.
The first talks about FPGA Acceleration of Convolutional Neural Networks. Nallatech performed the work. It's more marketing than an academic paper, but still an interesting read, and it might be a jumping-off point for someone interested in experimenting. I am not connected to Nallatech in any way.
The second paper came out of the University of Birmingham, UK, written by Yufeng Hao. It presents A General Neural Network Hardware Architecture on FPGA.
Most attempts at building a 'literal' neural network on an FPGA hit the routing limits very quickly; you might get a few hundred cells before place-and-route takes longer to finish than your problem is worth waiting for. Most of the research into NNs on FPGAs takes this approach, concentrating on a minimal 'node' implementation and suggesting that scaling is now trivial.
The way to make a reasonably sized neural network actually work is to use the FPGA to build a dedicated neural-network number-crunching machine. Get your initial node values into a memory chip, have a second memory chip for your next-timestep results, and a third area to store your connectivity weights. Pump the node values and connection data through using techniques that keep the memory buses saturated (order node loads by CAS line, read ahead using pipelines). It will take a large number of passes over the previous dataset as you pair off weights with previous values and run them through DSP MAC units to evaluate the new node values, then push the results out to the result memory area once all connections are evaluated. Once you have a whole timestep finished, reverse the direction of flow so that the next timestep writes back to the original storage area.
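As a plain-C model of that dataflow (a sketch only; the dense connectivity, sizes, and activation function are invented for illustration, and a real design would stream from external memory rather than hold static arrays like this):

    #include <stddef.h>

    #define N_NODES 1024                         /* illustrative size */

    /* Two node-value buffers that swap roles each timestep, mirroring the two
     * memory areas described above; weights live in a third area. */
    static float values_a[N_NODES], values_b[N_NODES];
    static float weights[N_NODES][N_NODES];      /* dense connectivity for simplicity */

    static float activate(float x) { return x > 0.0f ? x : 0.0f; }  /* placeholder */

    static void timestep(const float *prev, float *next)
    {
        for (size_t i = 0; i < N_NODES; i++) {
            float acc = 0.0f;                    /* the DSP MAC units' job on the FPGA */
            for (size_t j = 0; j < N_NODES; j++)
                acc += weights[i][j] * prev[j];
            next[i] = activate(acc);
        }
    }

    void run(int steps)
    {
        float *prev = values_a, *next = values_b;
        for (int t = 0; t < steps; t++) {
            timestep(prev, next);
            float *tmp = prev; prev = next; next = tmp;  /* reverse direction of flow */
        }
    }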
I want to point out a potential issue with implementing a neural network in an FPGA. FPGAs have a limited amount of routing resources. Unlike logic resources (flops, look-up tables, memories), routing resources are difficult to quantify. Maybe a simple neural network will work, but a "massively parallel" one with mesh interconnects might not.
I'd suggest starting with a simple core from OpenCores.org just to get familiar with the FPGA flow, and then move on to prototyping a neural network. Downloading the free Xilinx WebPack, which includes the ISim simulator, is a good start. Later on you can purchase a cheap dev board with a small FPGA (e.g. a Xilinx Spartan-3) to run your designs on.
A neural network may not be the best starting point for learning how to program an FPGA. I would initially try something simpler like a counter driving LEDs or a numeric display and build up from there. Sites that may be of use include:
http://www.fpga4fun.com/ - Excellent examples of simple projects and some boards.
http://opencores.org/ - Very useful reference code for many interfaces, etc...
You may also like to consider using a soft processor in the FPGA to help your transition from C to VHDL or Verilog. That would allow you to move small code modules from one to the other to see the differences in hardware. The choice of language is somewhat arbitrary - I code in VHDL (syntactically similar to Ada) most of the time, but some of my colleagues prefer Verilog (syntactically similar to C). We debate it once in a while, but really it's personal choice.
As for the buyers / learners guide, you need:
Patience :) - The design cycle for FPGAs is significantly longer than for software due to the number of extra 'free parameters' in the build, so don't be surprised if it takes a while to get designs working exactly the way you want.
A development board - For learning, I would buy one from one of the three bigger FPGA vendors: Xilinx, Altera or Lattice. My preference is Xilinx at the moment but all three are good. For learning, don't buy one based on the higher-end parts - you don't need to when starting using FPGAs. For Xilinx, get one based on the Spartan series such as the SP601 (I have one myself). For Altera, buy a Cyclone one. The development boards will be significantly cheaper than those for the higher-end parts.
A programming cable - Most companies produce a USB programming cable with a special connector to program the devices on the board (often using JTAG). Some boards have the programming interface built in (such as the SP601 from Xilinx) so you don't need to spend extra money on it.
Build tools - There are many varieties of these but most of the big FPGA vendors provide a solution of their own. Bear in mind that the tools are only free for the smaller lower-performance FPGAs, for example the Xilinx ISE Webpack.
The software comprises stages with which you may not be familiar having come from the software world. The specifics of the tool flow are always changing, but any tool you use should be able to get from your code to your specific device. The last part of this design flow is normally provided by the FPGA vendor because it's hardware-specific and proprietary.
To give you a brief example, the software you need should take your VHDL and Verilog code and (this is the Xilinx version):
'Synthesise' it into constructs that match the building blocks available inside your particular FPGA.
'Translate & map' the design into the part.
'Place & route' the logic in the specific device so it meets your timing requirements (e.g. the clock speed you want the design to run at).
Regardless of what Charles Stewart says, Verilog is a fine place to start. It reminds me of C, just as VHDL reminds me of Ada. No one uses Occam in industry, and it isn't common in universities.
For a Verilog book, I recommend these, especially Verilog HDL. Verilog does parallel work trivially, unlike C.
To buy, get a relatively cheap Cyclone III eval board from Altera (e.g. a Cyclone III board with NIOS for $449, or another for $199) or Xilinx.
I'll give you yet a third recommendation: use VHDL. Yes, on the surface it looks like Ada, while Verilog bears a passing resemblance to C. However, with Verilog you only get the types that come with it out of the box, whereas with VHDL you can define your own new types, which lets you program at a higher level (still RTL, of course). I'm pretty sure the Xilinx and Altera free tools support both VHDL and Verilog. "The Designer's Guide to VHDL" by Ashenden is a good VHDL book.
VHDL has a standard fixed-point math package which can make NN implementation easier.
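The same idea expressed in C terms, as a rough sketch (the Q8.8 format chosen here is arbitrary): multiply in a wider type, then shift back down, which is essentially what the synthesized fixed-point hardware does for each MAC step of a neuron.

    #include <stdint.h>

    /* Q8.8 fixed point: 8 integer bits, 8 fractional bits (format chosen arbitrarily). */
    typedef int16_t q8_8_t;
    #define Q8_8_ONE (1 << 8)

    static inline q8_8_t q8_8_from_float(float f) { return (q8_8_t)(f * Q8_8_ONE); }
    static inline float  q8_8_to_float(q8_8_t q)  { return (float)q / (float)Q8_8_ONE; }

    /* Widen, multiply, renormalize: one MAC step of a fixed-point neuron. */
    static inline q8_8_t q8_8_mul(q8_8_t a, q8_8_t b)
    {
        return (q8_8_t)(((int32_t)a * (int32_t)b) >> 8);
    }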
It's old, because I haven't thought much about FPGAs in nearly 20 years, and it uses a concurrent programming language that is rather obscure, but Page & Luk, 1991, Compiling Occam into FPGAs covers some crucial topics in a nice way, enough, I think, for your purposes. Two links for trying stuff out:
KRoC is an actively maintained, Linux-based Occam compiler, which I know has an active user base.
Roger Peel has a logic-synthesis page that has some documentation of his Linux-based workflow from Occam code synthesis through to FPGA I/O.
Occam->FPGA isn't where the action is, but it may be a much better place to start than, say, Verilog.
I would recommend looking into Xilinx high-level synthesis (HLS), especially if you are coming from a C background. It abstracts away the technical details of using an HDL, so the designer can focus on the algorithmic implementation.
There are restrictions on the type of C code you can write. For example, you can't use dynamically sized data structures, as that would infer dynamically sized hardware; see the sketch below.
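For instance, a statically sized kernel like the following is HLS-friendly, while anything involving malloc is not (the pragma shown is from Xilinx's HLS dialect; treat it as illustrative and check your tool version's documentation):

    #define N 64   /* size fixed at compile time -> fixed amount of hardware */

    /* HLS-amenable: statically sized arrays, bounded loop, no dynamic allocation. */
    void dot_product(const int a[N], const int b[N], int *result)
    {
        int acc = 0;
        for (int i = 0; i < N; i++) {
    #pragma HLS PIPELINE II=1     /* illustrative Xilinx HLS pragma */
            acc += a[i] * b[i];
        }
        *result = acc;
    }

    /* Not synthesizable: the hardware size would depend on a run-time value.
     *   int *buf = malloc(n * sizeof *buf);
     */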

Should developer tools, languages, frameworks, etc. be standardized across an organization? [closed]

The organization that I currently work for seems to be heading in the direction of dictating to software developers which tools, languages, frameworks, etc. must be used. However, nobody has convinced me that this is a good thing. The main argument I have heard is that it will make training easier. But, after developing software for over 10 years, I've never relied on training to learn how to use an IDE, programming language, or anything else; so I just can't relate.
With the rapid speed at which technology evolves, and the s-l-o-w-n-e-s-s at which I know the standards will adapt, I am concerned that my customers will have requirements that I won't be able to easily implement or won't be able to implement as efficiently as I should. For example, if there is a UI requirement for an auto-complete feature in a web app, and no API has been approved for this yet, I would need to implement auto-complete myself as opposed to using one of the many APIs that provide it out of the box.
A more radical example is if my customers wanted to have Google Wave features. In that case I would want the flexibility of configuring my development environment (including the IDE) and selecting appropriate frameworks (ex: GWT) to use.
Please provide feedback on whether or not you think that software developer tools, languages, etc should be standardized and a few points to support your argument.
There is a lot of benefit for standardization. My organization has fairly set standards on what technology we will use. We realize strong benefits in the following areas ...
Hiring. It is easy to describe what technologies we are looking for and make sure our recruiters are looking for the right people.
License/Software costs. I can buy enterprise licenses easily. It gives me the opportunity to keep costs down by letting me spend more with a smaller number of vendors and thus get more leverage.
Consistency of delivery. Our teams have a very good idea of what projects will take to build, rollout and maintain because they have done it with success before (and they know the pitfalls too).
Agility. I can have one team take over for another or one individual take over for another more easily because of standardization.
Quality. We have peer reviews across teams as well as QA across teams.
Without a consistent use of a technology stack, tools, languages and frameworks, these types of benefits would be more difficult to realize. I am not closed off to new technologies, but there has to be a concrete reason beyond "what if I want to ..."
A major issue with standardization is that once standards are out there, they get stamped in concrete and are difficult to change. This is why our corporate IT environment is stuck on IE 6, and the best change control system we have access to is CVS. Given this situation, some developers break the rules, and some find jobs at more innovative companies.
You have a mixed bag here.
I wouldn't standardize on IDEs, because every developer works differently. Those who are insanely proficient in emacs may see their performance suffer if forced to use Visual Studio. I optimize my Visual Studio experience with a 30" monitor and find it incredibly productive.
However, standardizing on some tools, such as SCons or make or something to build products is perfectly reasonable.
Banning some libraries and having a process where new libraries are either approved or not is also very reasonable. I know lots of companies that ban Boost, or jQuery, or ban open-source libraries in general, etc., and they had good reasons for doing it. I know I got fairly upset when an intern incorporated some random "security" library he found on the internet without running it by anyone.
In the end every company is different. You have to be standardized enough to avoid serious complications and issues as people come and go, or as new products are formed and organizational structures change. But you have to be flexible enough to avoid re-inventing every wheel you need.
The important thing is to have clear reasons for adopting a certain tool or banning some other tool or library. You can't just have management dictate that thou shalt use this and not that without consulting the engineering team and making the decision for good reasons. And once decisions are made those reasons should be written down and clearly communicated.
And also, if, in the end, your favorite tool or library isn't adopted, please don't whine about it. Be adaptable and do your job, or find a new one that makes you happier.
I once worked for a manager who felt the need to innovate at every level of his software development operation. Every development tool had to be cutting edge (preferably in beta). Many of the tools he asked us to use didn't have good documentation, and training was not available. Ultimately, most of the technology we tried simply didn't work. We wasted a lot of time churning through new technologies, only to dump them when it became clear we couldn't make progress.
I tried to make the case that innovation is perfect in the area where your value proposition lies. Innovation can also be used judiciously where standard techniques fail. But for most mundane tasks, using tried-and-true tools and methods should be the default. Less risk, less cost, less management attention needed. So you can focus time and energy on the areas where innovation has the most benefit.
So I think standardization has an important role. But blindly saying everything must be standard is just as sure to fail as my manager who thought everything must be innovative.
The number one argument in favor of standardization is that it maximizes the ability of the organization as a whole to use a common body of knowledge. Don't know how custom web controls are built in ASP.NET/C#? Ask Bill down the hall who has the knowledge. If you use different tools, such organizational wisdom is cut off at the knees. While it is not good to be restricted to a least common denominator (and hopefully your management will realize this) you should not overlook the benefits of shared experience!
UPDATE: I do not agree that innovation and standardization are polar opposites. Indeed, would we have nearly the level of web innovation if we still had the mishmash of networking standards characteristic of the 1980s? No we would not. Of course, we might have more innovation on new low-level networking protocols but is that really worth it? In its place, we've had an explosion of creativity within the bounds of TCP/IP and the Web standards (http, html, etc.)
The trick is knowing how to standardize without using it as an argument for closing down all new exploration. For example, we use only ASP.NET/C#/SQL Server in my company but I'm perfectly open to the use of new tools within this framework (we recently adopted the DevExpress reporting package, for example, supplanting the earlier standard).
Standardization is a must for a productive development team. However, that doesn't mean you can't revisit the standards from time to time to adjust them to new technologies and trends.
Whether you develop operations software for internal clients, or products for external clients, there is no compelling reason not to standardize. You certainly did not give one.
Had you seen how companies struggle to hold together heterogeneous products that have been maintained for 10 years or more, and are now a conglomerate of various technologies that developers at some point thought made sense, you would not have asked this question.
From the top of my head, I could name at least 2 well-known software companies that will be driven out of business because their cost of maintenance has become so high that they can no longer compete (but I won't).
I think the misconception here is that suppressing individualism would suppress innovation. That is simply not true. It is poor technical leadership that suppresses innovation.
One unpleasant consequence of standardization is that it tends to stifle innovation.
Innovation is scary. It involves cost and risk.
Standardization is not scary. It reduces cost and risk in the short term. Until your competitors have created a game-changing innovation. Then standardization is very costly.
It depends on the organization I think. One like Microsoft, yes, there should be a bit of a standard. A small business with one IT department, no. A larger business with several offices around the world ... maybe.
it all depends :-P
Assuming the organization has a broad suite of enterprise applications to manage, I'd say no for the following reasons, though I may be taking the message of everything being the same a bit too literally:
Compromise on using best-of-breed systems: e.g., if all the databases are to be MS SQL, then any Oracle DB solution is thrown out. This also applies to the fact that everyone using an IDE has to use the same one, whether they are doing data-warehouse report development, web applications, console applications, or WinForms. I'm thinking of systems like ERP, CRM, SCM, CMS, SSO and various other TLAs, FLAs, and SLAs. (LA = letter acronyms, for a decoding hint if you need it)
Upgrading by committee is another interesting issue. Where each team can choose its own tools, one person can decide when to upgrade, e.g. to start using Visual Studio 2008 instead of Visual Studio 2005. With a standard, you have to determine at what threshold it is worth upgrading everyone simultaneously, which may be a big headache if there are more than a few developers. For example, over the past 10 years, when would the IDE changes, framework changes, etc. have happened?
Exceptions to the standards. Could a contractor bring in something not used in the organization if they believe it helps them build better software, e.g. ReSharper or other add-ons that some contractors consider very worthwhile but the organization doesn't want to pay for? What about legacy systems that may make the standard unwieldy, e.g. this was built in ASP.NET 1.1, so everyone has to have VS 2003 installed even if most will never use it?
Just my thoughts on this.
There are several good reasons to standardize.
First, it allows the enterprise better organizational flexibility if everybody is more or less familiar with the same things. It also allows people to help each other better. I can't help with problems in the ASP.NET stuff, and there aren't all that many people who can help me on the C++ side.
Second, it reduces support problems and expenses. Oracle and SQL Server are both decent products, but using both for similar functions is only going to cause problems. Not to mention that I've been in shops using several widely different platforms to do similar things, and it wasn't fun.
Third, there are some things that just have to be standardized. We couldn't operate half with VS 2005 and VS 2008, since we keep project files under source control. We had to pick a time and convert over.
Fourth, in some businesses, it simplifies the regulatory problems. I don't know what business you're in. I work at a place where we can get away with making mistakes right now, but I've also contracted at a bank and a utility, where it's necessary to be able to show auditors that everything is going in a standard way.
Fifth, it can simplify procurement, if you're dealing with software that costs money.
This doesn't particularly limit us, since if there's something we need that isn't standardized on we just go ahead and get it or do it.
If you want to make a business case against standardization, you'll need to have a business-related argument. Your argument seems to be that you won't be able to implement features the user wants, and that is a consideration. Got another argument?
There's nothing wrong with standardizing on an IDE that is rich enough to be configured for individual developers.
However, do make sure that you don't prevent individual developers from using additional tools, as long as the tools are licensed and that the use of the tool by one developer doesn't require all other developers to use it.
For instance, I happen to use NORMA to help me design databases. The output is SQL Server DDL (or anything else I want). I can make the DDL part of the project without making my NORMA source part of it. Later developers do not need to use NORMA to work on the project.
On the other hand, if I decided to use the Configuration Section Designer to create configuration sections, then future developers would also have to use it. A decision would need to be made about whether to use that tool.
The company I work for uses C#, ASP.NET, JavaScript and generates HTML. The advantages over and above those mentioned above are that there is a perception of improved velocity for maintenance and adaptive changes. The disadvantages include generating some boredom for people who are technically savvy (geeky) and prefer to use a mix and match of languages, depending on what they fancy is better suited, or for 'performance reasons'.
Technical and personal supervision is always good to have when you are developing as fast as you can to meet tight deadlines and competing in a highly saturated market for web development.

Software design period...what do other developers do? [closed]

I'm a new software architect/lead coming up with the software design for a team of developers. I'm producing the requirement spec, interface header files, Visio software-design docs, build plan, etc.
My question is: what does the rest of the team do during this period? I'm certainly engaging them in the design, but we don't need the whole team actively working on what I'm doing all the time.
Are there any good books for a new software architect?
Generally the various stages overlap, so there will be some coding during design, etc. There is a lot to do besides that: reviewing unfamiliar technology that is going to be used, setting up the source control system, reviewing business requirements, and reviewing your documents to make sure they make sense and are clear. There is a lot of work to be done besides programming.
What a software team does while the lead does the design differs from company to company. At my company we try to work on the design while the developers are finalizing other projects or fixing bugs.
Another approach that I've taken when starting a whole new project is to get the developers to work on the design as well - people with a good understanding of the requirements can help you design smaller parts of the system and write the specs for them. Others can work on mockups and frameworks. This worked rather well for the small software team I led in a previous job (4 developers in total).
I also found it useful to have other team members research parts I'm unsure of (or even validating that things I think should work will indeed work), such as:
Investigating whether an external API provides the features we need
Writing a small proof of concept or technology demonstrator
Create an API mockup (header file, interface, or REST endpoint) to investigate whether the API looks useful; see the sketch after this list.
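To make that last item concrete, such a mockup can be nothing more than a compilable header that lets reviewers argue about the API's shape before anything is implemented (everything here is invented for illustration):

    /* payment_api_mock.h -- hypothetical API mockup; no implementation exists yet. */
    #ifndef PAYMENT_API_MOCK_H
    #define PAYMENT_API_MOCK_H

    #include <stdint.h>

    typedef struct {
        uint64_t order_id;
        uint32_t amount_cents;
        char     currency[4];                 /* e.g. "EUR" */
    } payment_request_t;

    typedef enum { PAY_OK, PAY_DECLINED, PAY_TIMEOUT } payment_status_t;

    /* Does this signature give callers everything they need? Answering that
     * question is the whole point of the mockup. */
    payment_status_t payment_submit(const payment_request_t *req);

    #endif /* PAYMENT_API_MOCK_H */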
As others have said, you typically want a ramp-up period during the first part of the project and through the first iteration. You're planning on building this iteratively, aren't you? Start with a core team (no more than 3-4 people, since you're going to need to communicate heavily with each other) to help you explore the requirements, get a basic data model in place, identify and set up any frameworks, and identify and set up build and test tools. Some coding activities typically take place in the design phase: UI mockups, and run-ahead prototypes of technically sensitive areas (whatever risks you have should be mitigated by explorative coding, be they new technologies, undocumented interfaces to integrated systems, or unstable requirements).
But coders in the design phase should help with the design, in order to get their buy-in and to help train up the rest of the team during the first iterations. Your role during this is to ensure that the major nonfunctional requirements are known, prioritized, met by the design, and testable. You should also collaborate with the project lead or whoever else is responsible for staffing and financing in order to sketch out the iterations and the staffing levels needed. Ensure the solution can be built iteratively, and aim at implementing only a basic structure during the first iteration, both to build confidence and to eliminate risks. (Sometimes you can push major risks to the second iteration, and focus the first on confidence and team building.)
And of course, be sure you are not designing every detail. You should be able to use every design artifact in the next iteration (and elaborate them later as needed). Since design decisions are expensive to change, try to postpone them. However, some influence the entire solution (for instance, the data model, or your approach to security) and absolutely must be at least outlined up front. This isn't waterfall. This is just not closing your eyes and hoping a viable architecture will emerge by magic.
But design proceeds throughout the iterations. It's just that you do less of it as you go along, and with lesser impact on the solution (unless you're unlucky... and then things get expensive).
Stop doing the useless things you do and just start coding with them! ;)
If there is no overlap with another ongoing project, getting them involved as you're doing is great, maybe push it a little further by having them prototype and present the plus and minus of alternative technologies (APIs, frameworks, libraries, etc...) that your project could use.
As a new software architect, I can recommend some books that helped me understand the role of the architect (but of course not to master it):
Fundamentals of Software Architecture: An Engineering Approach:
This book gives a good, modern overview of software architecture and its many aspects; a good place to start if you are a beginner or want to broaden your knowledge.
Software Architecture in Practice:
Explains what software architecture is, why it's important, and how to design, instantiate, analyze, evolve, and manage it in disciplined and effective ways.
Software Architect's Handbook:
This book takes you through all the important concepts, right from design principles to different considerations at various stages of your career in software architecture. It begins by covering the fundamentals, benefits, and purpose of software architecture.
Clean Architecture: A Craftsman's Guide to Software Structure and Design:
Learn what software architects need to achieve and how to achieve it, master essential software design principles and see how designs and architectures go wrong.
Software Architecture: The Hard Parts:
An advanced architecture book, with this book, you'll learn how to think critically about the trade-offs involved with distributed architectures.
Usually there's another project they can work on, but...
I have my team review the project specs/requirements and put together a basic/preliminary structure to get them already thinking through the application and working out specific questions.
When we convene at the table to discuss the plan they already have an idea of what the project is and requires and in some cases, they present questions I may have missed or overlooked.
Although it's too late now, a good way to approach it is to move the architect over before his current project has ended. Start freeing him up at like 25% then work your way up to 75-100% on the new project a month or two before it starts (maybe more depending on how much analysis and customer interaction there is).
On a trivial project (let's say 2 man-years) it might not be necessary, but anything bigger than that can end up in chaos if somebody doesn't at least get the analysis right before everybody jumps aboard.
If your team does not have any other projects to work on, ask the experienced programmers on your team to come up with a prototype so that you can create a requirements doc according to the needs of the client.
Also, programmers new to the technologies the team uses could use this time to familiarize themselves with the stack on which your team is going to develop the project.
architect != designer
Chances are that all of your developers can help with the design; let them. Architects don't have to be "lone wolves" and do everything themselves. You lay out the guidelines and the principles and the scaffolding, rough in the wiring, and let your developers flesh out the details - whether it is drawing Visio diagrams or building prototypes to mitigate unknowns/risks.
Migrate towards Agile/XP and away from waterfall methods, and you'll find the team to be a lot more help.
When making the general design, it's very handy to have programmers create proof-of-concepts. Do that especially with parts of the system that could end up being show stoppers if they don't work in the way you plan to do them, so you can think of alternatives, and adjust the design.
That's going to help you to make the right design-decisions before moving entirely into a certain direction.
Just doing a design, and then moving on and start coding is a sure way to mess up a project. You won't realize that your design is not feasible (or just plain sucks) until you're half-way coding, and by then it's too late to make radical changes.
You'll waste time mitigating non-existing problems during the design, and you'll run into unforeseen problems during implementation.