We have a user experience designer on our team who has no programming background. He is expected to design screens within Eclipse as a development environment. His (valid) complaint is that every time he designs a specific screen and hands it to development, they tell him what is not technically possible using either SWT or GEF. So, he wants me to teach him the basics of SWT/GEF so that he can make informed decisions and maybe even try out certain things in Eclipse (as opposed to using Photoshop) before proposing designs, to save time.
My personal belief is that design should not be constrained by technical possibilities and in theory, everything that the designer dreams of (at least the practical things) should be possible technically - albeit with workarounds or a little hacking.
So, my question is this - how important do you think is programming knowledge for user interface design? And if it is, how do you go about teaching someone with absolutely no programming experience the graphical frameworks on various platforms?
In principle, I agree with you. Programming knowledge shouldn't be necessary to be a skilled designer of UIs and work flows. However, knowing the abilities and limitations of the technology in use can help a UI designer work more effectively with the programming staff.
Programming knowledge helps when the development staff is blowing smoke and claiming that something cannot be done when it can be; some knowledge of the tools being used can help refute that. And if the development staff is correct that something cannot be done, knowledge of the tools can help the UI designer find an alternative that meets the design goals and is achievable.
With a properly cooperative development staff, the UI designer would need very little (if any) knowledge about the specific GUI tools being used.
I've been on the developer side of this where I was being asked to do something impossible or impractical. I always worked with the designers to find a happy middle that met the design goals. Sometimes what I thought was impossible was in fact possible. Sometimes we had to do things a different way. A few things had to be put off as "possible, but too much effort." (Such as an SWT based application that became a Windows task bar. Definitely possible, but impractical for the project in question as it would require native code.)
What is most important is that both sides realize that they are on the same team.
It's very important.
Not knowing about:
technology in general
the technology you have chosen to invest your time in to produce the end product
will result in a complete waste of time for everyone.
Even the end user needs to learn a bit about the technology employed in order to use whatever product we make.
Someone who drives a car always needs to know how to put gas in it and understand the basics of what a car is and what it can do; software works in a similar fashion.
It's like asking someone who doesn't know that today's cars need wheels to draw your next release model.
The way to make them more aware of the technology is:
Show him/her similar products to the ones you should be making
Show him/her stand alone implementations of the building blocks you might consider using
But by all means, this doesn't mean you should stifle their creativity. Let them draw what they dream; just make them that little bit aware of reality, as needed, so that something actually gets done in this lifetime.
So, my question is this - how important do you think is programming knowledge for user interface design?
I think a basic knowledge of the standard user interface for the platform is required (text fields, combo boxes, radio buttons, etc). A good designer should be familiar with the capabilities and limitations of these GUI components, from a developer's point of view. So I guess some basic programming knowledge would be useful.
My personal belief is that design should not be constrained by technical possibilities and in theory, everything that the designer dreams of (at least the practical things) should be possible technically - albeit with workarounds or a little hacking.
I think there are important qualifications here --- each OS has guidelines on what constitutes good GUI design, and it's beneficial for your product that you follow them because the user has a certain mental model of how he or she should interact with applications on that platform. (Having said that, there may be good reasons for breaking some design conventions, e.g. in games, specialized graphics/music applications.)
how do you go about teaching someone with absolutely no programming experience the graphical frameworks on various platforms?
Each toolkit makes available a whole bunch of small sample programs to demonstrate the use of different components --- this is probably a good first step to acquaint oneself with them.
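For SWT, the Eclipse project publishes a large collection of such snippets. As a rough idea of what one of these starter programs looks like (the widget choice, labels, and layout here are just illustrative, not taken from any particular snippet):

```java
import org.eclipse.swt.SWT;
import org.eclipse.swt.layout.GridData;
import org.eclipse.swt.layout.GridLayout;
import org.eclipse.swt.widgets.*;

public class HelloSwt {
    public static void main(String[] args) {
        Display display = new Display();              // one Display per application
        Shell shell = new Shell(display);             // top-level window
        shell.setText("SWT basics");
        shell.setLayout(new GridLayout(2, false));    // two-column grid

        new Label(shell, SWT.NONE).setText("Name:");
        Text name = new Text(shell, SWT.BORDER);
        name.setLayoutData(new GridData(SWT.FILL, SWT.CENTER, true, false));

        Button greet = new Button(shell, SWT.PUSH);
        greet.setText("Greet");
        greet.addListener(SWT.Selection, e ->
                System.out.println("Hello, " + name.getText()));

        shell.pack();
        shell.open();
        while (!shell.isDisposed()) {                 // standard SWT event loop
            if (!display.readAndDispatch()) display.sleep();
        }
        display.dispose();
    }
}
```

Running and tweaking a handful of programs like this gives a designer a concrete feel for what the stock widgets can and cannot do.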
It's not as important as common sense, in my opinion.
It helps, of course. But if the designer is asking for something that can be done (because some other application does it), the development lead should at least present a workaround.
Programming knowledge, probably not; knowledge of the limitations of the chosen platform, certainly.
I think it's better to learn up front, but if your UI designer is forced to learn on the fly, make sure that each time he is turned away, it's explained why something can't be done rather than just a flat refusal. This will keep him from getting as frustrated as he might otherwise be because he'll be able to form at least some logical framework for what he can and can't do.
I think the designer should be aware of the features and limitations of the tools he's using, and he should be aware of the constraints and deadlines of the current project the developers are working on.
He should also be aware of the background processing that goes on to show the screen UI, and all of this will come only if he has some rudimentary knowledge of programming.
He doesn't have to dabble in the depths of OOP, learn SQL, or know the intricacies of reflection or anything fancy like that. He just has to know his platform well, and that, I think, is a requirement even for designers.
The very core of "design" is to find a way to achieve a desired result within the constraints. If you don't know anything about the constraints that affect your goal, then you can't design.
It all depends on your tools.
Edit: What I mean is that there are tools designed for designers, and tools designed for programmers. Eclipse, for one, is not a designer tool. Photoshop is. Flash, maybe; Flex, no. I wouldn't require a Flash designer to program, but a Flex designer does need to program.
As for telling them about the limits of your tools: it depends. Really good creative designers will embrace those limits and make incredible work; mediocre designers will perceive the limits as roadblocks, stop being creative, and just follow the rules out of fear.
I have given it some thought and, based on the answers given previously, I have reached certain conclusions:
A preliminary knowledge about what is possible and what are the constraints while designing on a given platform is mandatory. This means that the graphics designer should be aware of the following:
The basic design guidelines on that platform
The standard UI toolkit / widgets provided on that platform (e.g. text boxes, drop-down lists, etc.)
What is not possible (or is too cumbersome) on a given platform (e.g. creating translucent modal dialogs while fading out the background in Eclipse)
This amount of knowledge might not require the designer to dabble in programming.
The second level is where the designer is attempting either to create new widgets or to (knowingly) go against the set standards for a given platform. For instance, the design might include graphs, the need to depict special relationships, or a unique combination of text, graphics and images that is not supported out of the box by any standard toolset. In this case, the designer should be aware of the technical possibilities and limits of the given platform, and I would argue that the designer should be able to write a little code and try out a few things to ascertain what might be within the realm of possibility.
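To make that second level concrete: the translucent-dialog example above is exactly the kind of thing a designer could try out in a few lines of throwaway SWT code instead of debating it in the abstract. A minimal sketch, assuming SWT is on the classpath (the alpha value and styles are placeholders, and setAlpha is only a hint that some window systems ignore):

```java
import org.eclipse.swt.SWT;
import org.eclipse.swt.widgets.Display;
import org.eclipse.swt.widgets.Shell;

public class TranslucentDialogSketch {
    public static void main(String[] args) {
        Display display = new Display();
        Shell main = new Shell(display);
        main.setText("Main window");
        main.setSize(400, 300);
        main.open();

        // A modal dialog with partial transparency; whether the window
        // system honours the alpha hint varies by platform.
        Shell dialog = new Shell(main, SWT.DIALOG_TRIM | SWT.APPLICATION_MODAL);
        dialog.setText("Translucent dialog?");
        dialog.setSize(200, 120);
        dialog.setAlpha(180);   // 0 = fully transparent, 255 = opaque
        dialog.open();

        while (!main.isDisposed()) {
            if (!display.readAndDispatch()) display.sleep();
        }
        display.dispose();
    }
}
```

Dimming or fading the rest of the workbench behind the dialog is the part that turns out to be cumbersome, which is precisely the sort of finding this kind of quick experiment surfaces early.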
I typically work on web apps that will only be used by a small group of well-controlled people, but now find that I'm writing something that has the potential to be used by a very large population. This means that the design and "look" will be very important to the success.
While I can certainly code up something functional, it ain't gonna look pretty, so I know that I'll need to get an outside designer to make things look good. Never having worked that way before I had a few questions about the mechanics of how this happens and how to try to make things easier.
We do Java, so when building a rich interface, we use GWT. I know that when working with designers, they typically provide images of what the interface should look like without any kind of usable output. My question is how best to bridge the gap between a simple drawing of an interface and a fully functional, realized one.
Any thoughts are appreciated.
Well, "it depends", as always.
Nowadays, I don't think you can work with someone who simply provides Photoshop mockups. At least not at your level. Mockups are simply too static, and translating those mockups into actual pages that work properly across different browsers is a skill set all its own.
So, you need someone beyond simply a designer, especially if you are planning any javascript wizardry, animations, or other dynamic elements that don't capture at all well on a static image.
What you really want is an "operational" mockup: static HTML files that look and behave as closely as possible to what the UI designer intends, including transitions, workflow, etc. This artifact can be run past all of the stakeholders as a live mockup, letting folks "feel" the site.
Once you have these HTML files, you can then do your part of backfilling these pages with actual server-side content. Obviously you can start early working on models and with the designers so as to have services ready to support the site functionality, but you shouldn't be committing any real time into actual pages for the site.
As for interacting with the designers, I talk more about that over here: How can I make my JSP project easier for a designer to work with
I worked on a project very much like this. We had "comps", which were pictures of what the interface would look like. We identified common objects and built modules, then built pages (this was for a web app) from modules plus any elements that were unique to that "comp".
A couple of things to keep in mind that will make life much easier: use the comps/drawings as more of a recommendation than a set-in-stone design, and try to identify common pieces early on and reuse code.
Also, designers aren't user experience gods. They often have a good idea of how things should work, but if you are close to your product and have a lot of product knowledge, don't be afraid to tweak the design as you and your group see fit. One thing that designers typically lack is product knowledge. They know a lot about general user experience and how a site should work, but they often won't know the ins and outs of your use cases and products.
If you are working with GWT, you should look for designers who are experts in CSS. Apart from, maybe, the main layout of the website, all the application components like form fields, dialogs, tabs and grids will need to be styled using CSS.
If the designers are not experienced in working with GWT, share GWT's documentation about styling with them. It's a good idea to read it yourself as well; specifically, explore GWT's theming system.
Also try to make use of UiBinder as much as possible. This allows you to stay as close as possible to traditional HTML-based design while still enjoying GWT's high-level object-oriented interfaces (both widgets and DOM).
Optionally you might want to tell the designers that GWT image bundles will automatically do "CSS Sprites" so they don't need to worry about page load performance issues related to images.
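For reference, the image-bundle/sprite behaviour and the obfuscated CSS class names come from GWT's ClientBundle and CssResource types. A minimal sketch of how they are wired together (the interface name and file names here are hypothetical):

```java
import com.google.gwt.core.client.GWT;
import com.google.gwt.resources.client.ClientBundle;
import com.google.gwt.resources.client.CssResource;
import com.google.gwt.resources.client.ImageResource;

// Bundles images and CSS so the GWT compiler can inline or sprite them.
public interface AppResources extends ClientBundle {
    AppResources INSTANCE = GWT.create(AppResources.class);

    @Source("logo.png")          // hypothetical image file in the same package
    ImageResource logo();

    @Source("app.css")           // class names get obfuscated by default
    AppCss css();

    interface AppCss extends CssResource {
        String primaryButton();  // must match a class defined in app.css
    }
}
```

The UI code would then call AppResources.INSTANCE.css().ensureInjected() once at startup and use AppResources.INSTANCE.css().primaryButton() when assigning style names to widgets.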
I'm writing an iPhone game and I am trying to write some requirements documents. I have never written requirements before, so I got the book Software Requirements. I have not finished it yet, but I foresee some issues, as this book is targeted at businesses. My main concern is that I am the only person involved with this game, and I feel the main purpose of the requirements document should be to nail down as many conceptual ideas of how the game works as I can before I am deep into design or construction. Does anyone have suggestions on how I should lay this out? Should I still try to mimic the template provided in the book where it makes sense, or, since I am both the sole developer and product owner, should I just stick to game concepts?
You're right that traditional SRS documents don't really fit games documentation all that well. Games instead have a general Game Design Document. It's usually created before any work on the game begins, and it's often edited as the development process goes to keep straight the intended end-result and specifics of the game.
While business software requirements documents are like contracts between a client and developer on what to produce, game design docs are more often specifications from the designer to the artists and programmers on what exactly they need to develop.
There is no specific layout to use. But you should consider who you're writing the document for. Is it for a class, for yourself, for peers after the project is done? The level of detail and the kind of things you include will be different depending on your audience. The format itself is very flexible, as long as it's coherent.
Brenda Brathwaite has a good blog entry on this subject which you might find helpful.
There is a semi-recent article from gamedev.net on the subject as well.
[Poor Jacob, you just read a book on the topic, and, collectively, the SO community writes another one for you, along with extra links, and probably with diverging views ;-) ]
Although I'm not familiar with the book you mention in the question, I think that the following suggestion may help you both take seriously, but also relax a bit, about the all too important question of requirements.
Being a "team of one", it is particularly important, and somewhat paradoxical, that you go through the effort of formalizing the requirements. However, rather than putting too much emphasis on the form, you may find an Agile approach to development (and hence to requirements gathering) more appropriate. With regard to requirements, one of the main advantages of this approach is flexibility, i.e. the understanding that while they should be formalized (with limited time/effort), requirements should be allowed to change (within limits) as part of an iterative process towards producing the target product.
In very broad terms, this generally goes as follows:
write "user stories"; these are individual "cards" (yes, physical cards, say 4 inches by 5 inches, are good, for you can then move them around, sort them, etc.)
each story describes a particular feature of the application, here the game, from the end-user's perspective. You can/should start all cards with "As a user, I need the game to..." and then follow with a particular feature, for example "... show my high score on the same page as the global high scores are kept" [because ... optional reasons why the user may want this feature].
review each story and assign a rough estimation of the time involved in implementing it
review each story and assign a priority level (scale may vary, but something simple like "Must have from Version 1.0", "Should eventually be in there, for sure", "Would be nice to have" and "Maybe nice to have...")
organize releases, on the basis of what you can do within say 2 or 3 weeks, maximum. If a particular feature were to take too long, schedule it for a later release.
implement the features assigned to the current release
iterate through this release cycle, reviewing the requirements as you go; the relative importance of features, and the need for new features, may become evident with the insight provided by using the [incomplete/imperfect] intermediate releases.
Books like the one you describe are focused at a different audience, but there is value in the general concepts presented. Fully developed requirements documents are not as common as you might think. Don't let anyone think that you are a 'bad developer' for not having the most detailed requirements.
Requirements docs might be more important if you need to communicate the requirements with a co-developer.
If you are the sole developer, I would strongly recommend that you spend your effort on the design and implementation of the game rather than on requirements. If you have a good idea of what you want to build, then let that flow as you build it.
Documentation can help you. The question is what is going to be most beneficial. Maybe design decisions are more critical than requirements for you but not for others. You'll maybe want to have a list of things that people have requested or ideas that you think of but cannot implement straight away. Sometimes a whiteboard can be handy for sketching out things, it's not just a tool for collaboration with other people.
Here's just a general approach...
Solidify the concept...write it in plain English first (ex: The game is a first person shooter. You kill zombies and hunt for treasure.)
Get a paper pad and pencil and draw out the general flow of the game and the main screens the users will encounter...main menu, options screen, help, etc. Make sure it makes sense.
Go to a site like mockingbird and create the detail wireframes for your screens...
Print these out and do some paper prototyping...i.e. put the printout in front of you and 'click' on a button...then bring up the appropriate screen...then click on another button, etc.
Once that makes sense, you can try to start coding your game.
Personally, I believe you should do this your own way. The most commonly available templates will not match your requirements; they might be suitable for a typical commercial server application, but not for a game. And since iPhone gaming is a new trend, you may have to look at it from a different perspective. You may not be able to fill a document with standard requirements, and you may end up with a different set of new kinds of requirements.
Just a suggestion... Sign up with Google Sites, and create a private site with documentation of the game, requirements, technical aspects, work log, etc... You can share it with select people, and it always keeps edit history.
I like it better than a Wiki because it is more structured, and just plain simple to use.
I realize that this may be subjective but I truly need an answer to this and I can't seem to find anything close enough to it in the rest of the Forum. I have read some folks say that the framework (any MVC framework) can obscure too many things while others say that it can promote good practices. I realize that frameworks are great for a certain level of programmer but what about individuals starting out? Should one just focus on the language or learn them together?
I think web development is way more than anyone grasps when they first start getting into it! Read this and know that it is all optional...but required to be really good at what you do.
I suggest that you spend time learning your language first. I would suggest learning C# simply because it is vastly more marketable and it is usually directly supported in most of MS products. By learning C# - programming in ASP.NET, console apps, servers, services, desktop apps, etc. will all be within your reach. You can program for most of the MS products as well as on many Linux type platforms.
Once you have this down then you can move to programming for the web as programming for the web has some intricacies that most other environments don't have. Concepts such as sessions, caching, state management, cross site scripting, styling, client side vs server side programming, browser support, how HTTP works, get vs post, how a form works, cookies, etc. are all at the top of the list of things to learn separately not to mention learning the ASP.NET base frameworks and namespaces.
Once you have the programming language down and then the concepts of web programming, I suggest that you pause and learn database design. Don't worry about performance just yet...try to first learn good design. Performance will come next. A good start for you is Access (blasphemy I know). It is easy for a beginner to work with, and it translates easily into a more robust platform such as SQL Server. Learn at the very least some SQL...but I suggest that you learn as much as your stomach can handle. I heard someone say that SQL is like the assembly language of the database. The number one thing that slows an application to a halt is piss-poor database design and poor queries. Once you have this knowledge, stuff it away in the back of your mind and take a look at a good ORM. NHibernate is probably best at the moment but is more complex than a basic learner needs. For that reason I currently suggest getting LINQ to SQL up and running as it is SUPER EASY to work with. Then look at Entity Framework (although I still think it sucks...and you should wait till EF 2.0...ERRRRR...now 4.0 released with .NET 4.0). Then NHibernate.
Now is the time to start to understand the infrastructure that is required by web development. You may bump your head against this as you learn some of the web programming stuff. But you need to understand the basics of DNS, IIS, load balancers, sticky routing, round robin, clustering, fault tolerance, server hardware setup, web farms, cache farms (MemCached Win32, Velocity), SMTP, MSMQ, database mail queuing, etc. Many people may say you don't need this. That there will be some knowledgeable network admin to help you out here. However they generally know things that impact them...not you. The more you know here the more valuable you will be to the company that hires you.
Now you can get into the details of best practices and design patterns. Learn about the basics such as repository pattern, factory pattern, facade pattern, model view presenter pattern, model view controller pattern, observer pattern, and various other things. Follow Martin Fowler and others for suggestions here. Take a look at concepts such as inversion of control, dependency injection, SOLID principle, DRY, FIT, test driven design, and domain driven design, etc. Learn as much as you can here before moving to the next step.
NOW you can think about frameworks! Start by creating a basic application with ASP Classic (comes with IIS for free!). This will give you a flavor of a no-frills web development environment. Take a look at ASP.NET Web Forms (briefly) to see how MS attempted to make things easier by hiding all the complex stuff (which you now know how to manage on your own from your readings of the above materials!!!). Now you no longer need ASP.NET Web Forms. Move immediately to ASP.NET MVC. The MVC framework gives you all the power you need to create a good, easily manageable web application. If you build something really big, no framework for pure web development may be able to deal with what you need. However, MVC is far more extensible for such UBER custom scenarios.
Now that you have made it through the journey to ASP.NET MVC, you can take a look at things such as Microsoft's Enterprise Application Blocks (such as they use at MySpace). Take a look at Elmah error logging (a must have). Look at how to build a custom SiteMapProvider for your MVC site. If you need to get into search, get to know Lucene.NET.
And if you made it this far...you are ready to figure out the rest on your own as it comes up! Have fun. There is a lot of room in this space for a person with some understanding of all of the above concepts.
You'll be using SOME sort of framework. The question is, what level do you want to learn at?
You'll probably not care to learn about asynchronous I/O and multithreaded vs. select/poll styles of web servers.
So then, your language of choice is going to provide a layer atop this, the language's preferred "web interface" API. For Java it's Servlets, the lowest level you'd typically code at for server-side web applications.
You should find what this "lower level" layer is in your language and learn the API at least. You should know basic HTTP like status codes, cookies, redirects, POST vs GET, URL encoding, and possibly what some of the more important headers do.
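In the Java case, a bare-bones servlet is enough to make most of those HTTP basics concrete. The sketch below (URL mapping and parameter names are invented for illustration) touches status codes, a cookie, GET vs POST, and a redirect:

```java
import java.io.IOException;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.Cookie;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

@WebServlet("/greet")                      // hypothetical URL mapping
public class GreetServlet extends HttpServlet {

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        // GET: read a query-string parameter, set a status code, write a body.
        String name = req.getParameter("name");
        if (name == null) {
            resp.sendError(HttpServletResponse.SC_BAD_REQUEST, "missing ?name=");
            return;
        }
        resp.setContentType("text/html;charset=UTF-8");
        resp.getWriter().println("<p>Hello, " + name + "</p>");
    }

    @Override
    protected void doPost(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        // POST: read a form field, drop a cookie, then redirect (POST-redirect-GET).
        String name = req.getParameter("name");
        resp.addCookie(new Cookie("lastName", name == null ? "" : name));
        resp.sendRedirect("/greet?name=" + java.net.URLEncoder.encode(
                name == null ? "" : name, "UTF-8"));
    }
}
```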
You'll then come to appreciate what these higher level frameworks bring to the table, and be better able to evaluate what is the appropriate level of abstraction for your needs/project.
Web development requires a certain degree of organization, since it relies so much on separation of concerns. The browser, for example, is designed to display data and interact with the user. It is not designed to lookup data from a database, or perform analysis. Consequently, a web development framework can help provide services that are needed to make the browser experience a practical one.
The nice thing about employing a platform is that it will provide core components essential to building any web application that you won't (and shouldn't) have to think about, such as user membership, for example. Many of the design decisions and much of the deep thinking about how to implement these services has already been done for you, freeing you to focus on what you actually want your application to do.
Of the available frameworks, I find that frameworks that implement the MVC (model-view-controller) pattern are very practical. They clearly organize different functions of web development, while giving you full control over the markup presented to the browser.
All that said, you will need some fundamental skills to fully realize web development, such as HTML, CSS, and a core programming language for the actual underlying program, whether you use a platform or not.
I don't think I agree with Andrew. I don't think learning C is a prerequisite for web development. In fact, learning something like JavaScript, ActionScript or PHP is often easier, due in large part to the vast number of sites and tutorials available, and it is enough to expose you to the fundamentals of pretty much every programming language: variables, conditions, loops and OOP. I just think learning C# introduces a lot of learning that isn't really relevant to web development, such as pointers and memory management.
As for whether you should learn a framework first? Definitely not. Never ever. You need to be able to stand on your own two feet first and be comfortable with HTML/CSS and server-side scripting (PHP/ASP/Python/Ruby, whatever), and, love it or loathe it, you're going to need a decent understanding of Flash and ActionScript.
The order in which you learn these is entirely up to you. But my learning plan would go like this...
Start with HTML. It takes about half an hour to get the basics (it's made up of tags with attributes, end of lesson 1) and it's good to get it out of the way first.
Then start learning CSS. You'll get the basics again very quickly. But CSS is a minefield, so expect to spend the rest of your life figuring it out.
Next up: ActionScript. Most people wouldn't agree with me, but bear with me. HTML and CSS aren't programming languages. ActionScript is. And learning a programming language for the first time is difficult and tedious. The advantage ActionScript has over most other languages is that the results are very visual. It's enjoyable to work with and you can sit back and take pride in your accomplishments at regular intervals. This isn't possible with server-side scripting languages or JavaScript, and there's a whole host of stuff you need to learn to get server-side scripting up and running. You can't build Space Invaders with PHP, for example.
I've changed my thinking here. I would encourage beginners to ignore ActionScript and focus on Javascript. I still believe that being able to see stuff on screen quickly is a good motivator, but I would encourage people to look at canvas tag tutorials and frameworks. Javascript has come a long way since 2009, and is now the lingua franca of programming, so it's incredibly useful. My initial point about HTML and CSS not being programming languages still stands.
Then, you can start with your server side language. At the same time, you're going to have to figure out the database stuff. I recommend PHP and MySQL because it's free.
Again, I've changed my thinking here. I would encourage beginners to use JavaScript on the backend (Node.js) and to split their database learning between relational databases and NoSQL solutions such as Mongo.
Then.... learn your framework. Or better yet, roll your own. That's what I've been doing and it's supercharged my learning.
If you're getting into web development, You HAVE to know how those building blocks work. You don't have to be an expert in all the areas, but you should try to become an expert in at least one of them. If you start learning a framework before you get the fundamentals you'll be in a sticky middle ground where you don't understand why things don't work which will infuriate you, and anyone who has to work with you.
You should learn how to use a framework because it will be helpful for you in the future, and it is easier to learn.
MVC will help you a lot, trust me. I once developed a web project without using MVC and it was a mess (at the time there were no well-known MVC frameworks and I had never heard of the pattern).
Short version: yes, and then some.
FWIW: this more generic answer may be of use to someone out there.
What: Frameworks take the tedium out of writing the same boilerplate code again and again. They hide complexity and design issues behind wizards and conventions. They also use special libraries, design patterns, etc. in ways that are far from obvious to a beginner.
So using a framework is good for getting things done without knowing exactly how - like using an ATM without knowing the internals. You just add your code bits in certain places and things 'just work'.
HTML > CSS > Ruby > SQL > Rails/JavaScript framework > libraries would make for a good learning track. The rest you learn as you go along by being curious, hanging out on forums, or as extended learning when the need arises.
HOW: The problem starts the minute you step outside simple text-book examples (i.e. when you try to get it to do something even a bit different).
You'll spend time decoding cryptic error messages when it seems like you've done everything right but things still don't work. Searching forums for the error strings may help. Or just restart from scratch.
Reading up articles and books, videos, trial-and-error, hard-work, search-engines, stackoverflow/forums, local gurus, design articles, using libraries, source-code browsing are a good way to climb the learning curve gently and on a requirement basis.
Working-against-the-framework is the number one problem for beginners. Understanding what the framework expects is key to avoiding white-hair in this phase. Having enough insight to manually do what the framework automates may help reduce this second-guessing effort.
WHY: For more advanced debugging/design, it's good to know what the framework is doing under the hood esp. when things don't work as you planned. Initially you can take the help of local-gurus or forum gurus who've already done the hard work. Later as you go deeper you can take on more of that role. For example there's a "rebuilding rails" book which looks under the hood of Ruby on Rails.
Note: Some of the tips are oriented towards Ruby/Rails but you can easily substitute your favourite language/framework instead.
The organization that I currently work for seems to be heading in the direction of dictating to software developers which tools, languages, frameworks, etc. must be used. However, nobody has convinced me that this is a good thing. The main argument I have heard is that it will make training easier. But, after developing software for over 10 years, I've never relied on training to learn how to use an IDE, programming language, or anything else; so I just can't relate.
With the rapid speed at which technology evolves, and the s-l-o-w-n-e-s-s at which I know the standards will adapt, I am concerned that my customers will have requirements that I won't be able to easily implement or won't be able to implement as efficiently as I should. For example, if there is a UI requirement for an auto-complete feature in a web app, and no API has been approved for this yet, I would need to implement auto-complete myself as opposed to using one of the many APIs that provide it out of the box.
A more radical example is if my customers wanted to have Google Wave features. In that case I would want the flexibility of configuring my development environment (including the IDE) and selecting appropriate frameworks (ex: GWT) to use.
Please provide feedback on whether or not you think that software developer tools, languages, etc should be standardized and a few points to support your argument.
There is a lot of benefit for standardization. My organization has fairly set standards on what technology we will use. We realize strong benefits in the following areas ...
Hiring. It is easy to describe what technologies we are looking for and make sure our recruiters are looking for the right people.
License/Software costs. I can buy enterprise licenses easily. It gives me the opportunity to keep costs down by letting me spend more with a smaller number of vendors and thus get more leverage.
Consistency of delivery. Our teams have a very good idea of what projects will take to build, rollout and maintain because they have done it with success before (and they know the pitfalls too).
Agility. I can have one team take over for another or one individual take over for another more easily because of standardization.
Quality. We have peer reviews across teams as well as QA across teams.
Without a consistent use of a technology stack, tools, languages and frameworks, these types of benefits would be more difficult to realize. I am not closed off to new technologies, but there has to be a concrete reason beyond "what if I want to ..."
A major issue with standardization is that once standards are out there, they get stamped in concrete and are difficult to change. This is why our corporate IT environment is stuck on IE 6, and the best change control system we have access to is CVS. Given this situation, some developers break the rules, and some find jobs at more innovative companies.
You have a mixed bag here.
I wouldn't standardize on IDEs, because every developer works differently. Those who are insanely proficient in emacs may see their performance suffer if forced to use Visual Studio. I optimize my Visual Studio experience with a 30" monitor and find it incredibly productive.
However, standardizing on some tools, such as SCons or make or something to build products is perfectly reasonable.
Banning some libraries and having a process where new libraries are either approved or not is also very reasonable. I know lots of companies that ban Boost, or jQuery, or open source libraries in general, etc., and they had good reasons for doing it. I know I got fairly upset when an intern incorporated some random "security" library he found on the internet without running it by anyone.
In the end every company is different. You have to be standardized enough to avoid serious complications and issues as people come and go, or as new products are formed and organizational structures change. But you have to be flexible enough to avoid re-inventing every wheel you need.
The important thing is to have clear reasons for adopting a certain tool or banning some other tool or library. You can't just have management dictate that thou shalt use this and not that without consulting the engineering team and making the decision for good reasons. And once decisions are made those reasons should be written down and clearly communicated.
And also, if, in the end, your favorite tool or library isn't adopted, please don't whine about it. Be adaptable and do your job, or find a new one that makes you happier.
I once worked for a manager who felt the need to innovate at every level of his software development operation. Every development tool had to be cutting edge (preferably in beta). Many of the tools he asked us to use didn't have good documentation, and training was not available. Ultimately, most of the technology we tried simply didn't work. We wasted a lot of time churning through new technologies, only to dump them when it became clear we couldn't make progress.
I tried to make the case that innovation belongs in the area where your value proposition lies. Innovation can also be used judiciously where standard techniques fail. But for most mundane tasks, using tried-and-true tools and methods should be the default. Less risk, less cost, less management attention needed. That way you can focus time and energy on the areas where innovation has the most benefit.
So I think standardization has an important role. But blindly saying everything must be standard is just as sure to fail as my manager who thought everything must be innovative.
The number one argument in favor of standardization is that it maximizes the ability of the organization as a whole to use a common body of knowledge. Don't know how custom web controls are built in ASP.NET/C#? Ask Bill down the hall who has the knowledge. If you use different tools, such organizational wisdom is cut off at the knees. While it is not good to be restricted to a least common denominator (and hopefully your management will realize this) you should not overlook the benefits of shared experience!
UPDATE: I do not agree that innovation and standardization are polar opposites. Indeed, would we have nearly the level of web innovation if we still had the mishmash of networking standards characteristic of the 1980s? No we would not. Of course, we might have more innovation on new low-level networking protocols but is that really worth it? In its place, we've had an explosion of creativity within the bounds of TCP/IP and the Web standards (http, html, etc.)
The trick is knowing how to standardize without using it as an argument for closing down all new exploration. For example, we use only ASP.NET/C#/SQL Server in my company but I'm perfectly open to the use of new tools within this framework (we recently adopted the DevExpress reporting package, for example, supplanting the earlier standard).
Standardization is a must for a productive development team. However, that doesn't mean you can't revisit the standards from time to time to adjust them to new technologies and trends.
Whether you develop operations software for internal clients, or products for external clients, there is no compelling reason not to standardize. You certainly did not give one.
Had you seen how companies struggle to hold together heterogeneous products that have been maintained for 10 years or more, and are now a conglomerate of various technologies that developers at some point thought made sense, you would not have asked this question.
From the top of my head, I could name at least 2 well-known software companies that will be driven out of business because their cost of maintenance has become so high that they can no longer compete (but I won't).
I think the misconception here is that suppressing individualism would suppress innovation. That is simply not true. It is poor technical leadership that suppresses innovation.
One unpleasant consequence of standardization is that it tends to stifle innovation.
Innovation is scary. It involves cost and risk.
Standardization is not scary. It reduces cost and risk in the short term. Until your competitors have created a game-changing innovation. Then standardization is very costly.
It depends on the organization I think. One like Microsoft, yes, there should be a bit of a standard. A small business with one IT department, no. A larger business with several offices around the world ... maybe.
it all depends :-P
Assuming the organization has a broad suite of enterprise applications to manage, I'd say no for the following reasons, though I may be taking the message of everything being the same a bit too literally:
Compromise on using best-of-breed systems, e.g. if all the databases are to be MS-SQL, then any Oracle DB solution is thrown out. This also means that everyone using an IDE has to use the same one, whether they are doing data warehouse report development, web applications, console applications or WinForms. I'm thinking of systems like ERP, CRM, SCM, CMS, SSO and various other TLAs, FLAs, and SLAs. (LA = letter acronyms, for a decoding hint if you need it)
Upgrading by committee is another interesting issue. Whereas each team could previously choose its own tools and have one person decide when to upgrade (e.g. start using Visual Studio 2008 instead of Visual Studio 2005), you now have to determine at what threshold it is worth upgrading everyone simultaneously, which may be a big headache if there are more than a few developers. For example, over the past 10 years, when would the IDE changes, framework changes, etc. have happened?
Exceptions to the standards. Could a contractor bring in something not used in the organization if they believe it helps them build better software, e.g. ReSharper or other add-ons that some contractors consider very worthwhile but that the organization doesn't want to spend the money on? What about legacy systems that may make the standard a bit unwieldy, e.g. this was built in ASP.NET 1.1, so everyone has to have VS 2003 installed even if most will never use it?
Just my thoughts on this.
There are several good reasons to standardize.
First, it allows the enterprise better organizational flexibility, if everybody is more or less familiar with the same things. It also allows people to help each other better. I can't help with problems in the ASP.NET stuff, and there's not all that many people who can help me on the C++ side.
Second, it reduces support problems and expenses. Oracle and SQL Server are both decent products, but using both for similar functions is only going to cause problems. Not to mention that I've been in shops using several widely different platforms to do similar things, and it wasn't fun.
Third, there are some things that just have to be standardized. We couldn't operate with half the team on VS 2005 and half on VS 2008, since we keep project files under source control. We had to pick a time and convert over.
Fourth, in some businesses, it simplifies the regulatory problems. I don't know what business you're in. I work at a place where we can get away with making mistakes right now, but I've also contracted at a bank and a utility, where it's necessary to be able to show auditors that everything is going in a standard way.
Fifth, it can simplify procurement, if you're dealing with software that costs money.
This doesn't particularly limit us, since if there's something we need that isn't standardized on we just go ahead and get it or do it.
If you want to make a business case against standardization, you'll need to have a business-related argument. Your argument seems to be that you won't be able to implement features the user wants, and that is a consideration. Got another argument?
There's nothing wrong with standardizing on an IDE that is rich enough to be configured for individual developers.
However, do make sure that you don't prevent individual developers from using additional tools, as long as the tools are licensed and that the use of the tool by one developer doesn't require all other developers to use it.
For instance, I happen to use NORMA to help me design databases. The output is SQL Server DDL (or anything else I want). I can make the DDL part of the project without making my NORMA source part of it. Later developers do not need to use NORMA to work on the project.
On the other hand, if I decided to use the Configuration Section Designer to create configuration sections, then future developers would also have to use it. A decision would need to be made about whether to use that tool.
The company I work for uses C#, ASP.NET, JavaScript and generates HTML. The advantages over and above those mentioned above are that there is a perception of improved velocity for maintenance and adaptive changes. The disadvantages include generating some boredom for people who are technically savvy (geeky) and prefer to use a mix and match of languages, depending on what they fancy is better suited, or for 'performance reasons'.
Technical and personal supervision is always good to have when you are developing as fast as you can to meet tight deadlines and competing in a highly saturated market for web development.
I'm a new software architect/lead, coming up with the software design for a team of software developers. I'm putting together the requirements spec, interface header files, Visio software design docs, build plan, etc.
My question is: what does the rest of the team do during this period? I'm certainly engaging them in the design, but we don't need the whole team actively working on what I'm doing all the time.
Are there any good books for a new software architect?
Generally the various stages overlap, so there will be some coding during design etc. There are a lot of things to do besides that. They can be reviewing unfamiliar technology that is going to be used, setting up source control system, reviewing business requirements, reviewing your documents to make sure they make sense and are clear. There is a lot of other work to be done besides programming.
What a software team does while the lead does the design varies greatly from company to company. At my company we try to work on the design while the developers are finalizing other projects or fixing bugs.
Another approach that I've taken when starting a whole new project is to get the developers to work on the design as well; people with a good understanding of the requirements can help you design smaller parts of the system and write the specs for them. Others can work on mockups and frameworks. This worked rather well for the small software team I led in a previous job (4 developers in total).
I also found it useful to have other team members research parts I'm unsure of (or even validate that things I think should work will indeed work), such as:
Investigating whether an external API provides the features we need
Writing a small proof of concept or technology demonstrator
Creating an API mockup (header file, interface, or REST endpoint) to investigate whether the API looks useful (see the sketch after this list)
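Such a mockup can be as small as an interface with no implementation behind it; a hypothetical sketch (all names invented for illustration):

```java
import java.util.List;
import java.util.Optional;

// Hypothetical mockup of a service the design calls for; it compiles and can be
// reviewed, prototyped against, or stubbed out, but deliberately has no implementation.
public interface CustomerLookupService {

    /** Finds a customer by its unique id, if one exists. */
    Optional<Customer> findById(String customerId);

    /** Free-text search, returning at most 'limit' matches. */
    List<Customer> search(String query, int limit);

    /** Minimal value object; real fields to be agreed on during design review. */
    final class Customer {
        public final String id;
        public final String displayName;

        public Customer(String id, String displayName) {
            this.id = id;
            this.displayName = displayName;
        }
    }
}
```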
As others have said, you typically want a ramp-up period during the first part of the project and through the first iteration. You're planning on building this iteratively, aren't you? Start with a core team (no more than 3-4 people, since you're going to need to communicate heavily with each other) to help you explore the requirements, get a basic data model in place, identify and set up any frameworks, and identify and set up build and test tools. Some coding activities typically take place in the design phase: UI mockups, and run-ahead prototypes of technically sensitive areas (whatever risks you have should be mitigated by explorative coding, be they new technologies, undocumented interfaces to integrated systems, or unstable requirements).
But coders in the design phase should help with the design, in order to get their buy-in and to help train up the rest of the team during the first iterations. Your role during this is to ensure that the major nonfunctional requirements are known, prioritized, met by the design, and testable. You should also collaborate with the project lead or whoever else is responsible for staffing and financing in order to sketch out the iterations and the staffing levels needed. Ensure the solution can be built iteratively, and aim at implementing only a basic structure during the first iteration, both to build confidence and to eliminate risks. (Sometimes you can push major risks to the second iteration and focus the first on confidence and team building.)
And of course, be sure you are not designing every detail. You should be able to use every design artifact in the next iteration (and elaborate them later as needed). Since design decisions are expensive to change, try to postpone them. However, some influence the entire solution (for instance, the data model, or your approach to security) and absolutely must be at least outlined up front. This isn't waterfall. This is just not closing your eyes and hoping a viable architecture will emerge by magic.
But design proceeds throughout the iterations. It's just that you do less of it as you go along, and with lesser impact on the solution (unless you're unlucky... and then things get expensive).
Stop doing the useless things you do and just start coding with them! ;)
If there is no overlap with another ongoing project, getting them involved as you're doing is great; maybe push it a little further by having them prototype and present the pluses and minuses of alternative technologies (APIs, frameworks, libraries, etc.) that your project could use.
As a new software architect, I can recommend some books that helped me understand the role of the architect (but of course not to master it):
Fundamentals of Software Architecture An Engineering Approach:
This book gives a good, modern overview of software architecture and its many aspects; a good place to start if you are a beginner or want to broaden your knowledge.
Software Architecture in Practice:
Explains what software architecture is, why it's important, and how to design, instantiate, analyze, evolve, and manage it in disciplined and effective ways.
Software Architect's Handbook:
This book takes you through all the important concepts, right from design principles to different considerations at various stages of your career in software architecture. It begins by covering the fundamentals, benefits, and purpose of software architecture.
Clean Architecture: A Craftsman's Guide to Software Structure and Design:
Learn what software architects need to achieve and how to achieve it, master essential software design principles and see how designs and architectures go wrong.
Software Architecture: The Hard Parts:
An advanced architecture book; with it, you'll learn how to think critically about the trade-offs involved in distributed architectures.
Usually there's another project they can work on, but...
I have my team review the project specs/requirements and put together a basic/preliminary structure to get them already thinking through the application and working out specific questions.
When we convene at the table to discuss the plan they already have an idea of what the project is and requires and in some cases, they present questions I may have missed or overlooked.
Although it's too late now, a good way to approach it is to move the architect over before his current project has ended. Start freeing him up at like 25% then work your way up to 75-100% on the new project a month or two before it starts (maybe more depending on how much analysis and customer interaction there is).
On a trivial project (let's say 2 man-years) it might not be necessary, but anything bigger than that can end up in chaos if somebody doesn't at least get the analysis right before everybody jumps aboard.
If your team does not have any other projects to work on, ask the experienced programmers on your team to come up with a prototype so that you can create a requirements doc according to the needs of the client.
Also, programmers who are new to the technologies being used by the team could use this time to familiarize themselves with the technologies the team is going to use to develop the project.
architect != designer
Chances are that all of your developers can help with the design; let them. Architects don't have to be "lone wolves" and do everything themselves. You lay out the guidelines and the principles and the scaffolding, rough in the wiring, and let your developers flesh out the details - whether it is drawing Visio diagrams or building prototypes to mitigate unknowns/risks.
Migrate towards Agile/XP and away from waterfall methods, and you'll find the team a lot more help.
When making the general design, it's very handy to have programmers create proof-of-concepts. Do that especially with parts of the system that could end up being show stoppers if they don't work in the way you plan to do them, so you can think of alternatives, and adjust the design.
That's going to help you to make the right design-decisions before moving entirely into a certain direction.
Just doing a design, and then moving on and start coding is a sure way to mess up a project. You won't realize that your design is not feasible (or just plain sucks) until you're half-way coding, and by then it's too late to make radical changes.
You'll waste time mitigating non-existing problems during the design, and you'll run into unforeseen problems during implementation.