I'm currently in school and have been tasked with objectively evaluating a piece of software (Atlassian's Jira platform). I'm having trouble staying objective. For example, saying that the platform is "easy to use" reflects my opinion of the platform rather than evidence. So I'm curious to hear from you: do you know of any scientific method to evaluate software or services? I've already run a survey asking users how they use Jira and what they think about the platform, but I feel that this is not enough. I would like to have some numbers that point to how good or bad the software is.
The first thing to mention is that scientific work is always collective work. Keep in mind that others might already have done scientific work you can use. So you have to create a small team, or look for well-founded scientific work through the internet or through contacts at universities, if you have such contacts.
If no existing results fit, you have to create the knowledge yourself. In this case a mathematically based decision will help. A decision table can be the basis for a scientific decision: it requires a set of possible decisions (alternatives) and a set of factors, each given a specific weight. It contains both the analysis and the synthesis. After you have created the decision table, you should discuss it critically with a team until the team agrees on the results (and you might then offer them to the public). A minimal worked example follows below.
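To make this concrete, here is a minimal sketch of such a weighted decision table in Python; the tools, criteria, weights, and scores are invented purely for illustration.

```python
# Minimal weighted decision table sketch (all names and numbers are illustrative).
# Each criterion gets a weight; each alternative gets a 1-5 score per criterion.
# The weighted sum gives a comparable number per alternative.

criteria_weights = {
    "ease of use": 0.4,
    "integration": 0.3,
    "cost": 0.3,
}

# Scores per alternative, e.g. averaged from a user survey (1 = poor, 5 = excellent).
scores = {
    "Tool A": {"ease of use": 4, "integration": 5, "cost": 2},
    "Tool B": {"ease of use": 3, "integration": 3, "cost": 5},
}

for alternative, ratings in scores.items():
    total = sum(criteria_weights[c] * ratings[c] for c in criteria_weights)
    print(f"{alternative}: {total:.2f}")

# Tool A: 0.4*4 + 0.3*5 + 0.3*2 = 3.70
# Tool B: 0.4*3 + 0.3*3 + 0.3*5 = 3.60
```

The weights are where most of the subjectivity hides, so they are exactly what the critical team discussion should examine and document.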
I spent a short time studying Habanero and I found it a good approach for building enterprise applications in a really short period of time.
The pattern which Habanero uses is "Active Record", as its developers say.
My questions are:
Is there any similar application like Habanero which fully supports Domain-Driven Design by determining aggregate roots, entities, and value objects?
Is it the right decision to use such tools in big organizations?
Is it worth training our team on such a tool?
Thank you.
Framework support for Domain-Driven Design is quite different from frameworks supporting data-driven applications. Such a framework should increase the productivity of developers who work with a ubiquitous language that evolves with the business and is learned from a domain expert.
Developers should not have to face concepts like aggregates, roots, and value objects in the framework itself, because these are just modelling concepts, conceptual tools, not ways to ease the development process. Thus a framework exposing abstract classes or interfaces named AggregateRoot, Entity or ValueObject is fundamentally broken. It doesn't provide any real value to an application, just useless indirections.
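To illustrate what that means in code (a hypothetical sketch in Python rather than any specific framework's API): a value object and an aggregate root can be expressed entirely in domain terms, without inheriting from any framework-supplied ValueObject or AggregateRoot base class.

```python
# Hypothetical illustration: the domain concepts live in the ubiquitous language,
# not in framework base classes. "Money" is a value object and "Order" an aggregate
# root by how they are designed, not because they extend ValueObject/AggregateRoot.
from dataclasses import dataclass, field

@dataclass(frozen=True)          # immutability + equality by value -> value object
class Money:
    amount: int                  # minor units (e.g. cents)
    currency: str

@dataclass
class Order:                     # aggregate root: the only entry point for changes
    order_id: str
    lines: list = field(default_factory=list)

    def add_line(self, product_code: str, price: Money) -> None:
        # invariants of the whole aggregate are enforced here
        self.lines.append((product_code, price))

    def total(self) -> Money:
        currency = self.lines[0][1].currency if self.lines else "EUR"
        return Money(sum(p.amount for _, p in self.lines), currency)
```

Nothing framework-specific is needed for the concepts to be present, which is the point being made above.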
However:
There are a few frameworks designed to support Domain-Driven Design, listed here. Moreover, I'm developing one myself, based on previous experiences that worked very well.
It depends, obviously. For example, we used all of Epic's modeling patterns with success.
We used some "home made" frameworks too, and some of them proved to really increase productivity. However, such frameworks (if useful) always have steep learning curves, and it depends very much on how reliable the software has to be and what the developers' skills are.
It depends on the framework, on the complexity of the business (if you don't need a domain expert to understand it, you don't need DDD), and on the developers, too. I have seen both success stories and huge failures with different frameworks in different contexts. I've also given a conference talk on the topic (you can see the slides here).
In my copious free time, I collaborate with a number of scientists (mostly biologists) who develop software, databases, and other tools related to the work they do.
Generally these projects are built on a one-off basis, used in-house, and eventually someone decides "oh, this could be useful to other people," so they release a binary or slap a PHP interface onto it and shove it onto the web. However, they typically can't be bothered to make their source code or dumps of their databases available for other developers, so in practice, these projects usually die when the project for which the code was written comes to an end or loses funding. A few months (or years) later, some other lab has a need for the same kind of tool, they have to repeat the work that the first lab did, that project eventually dies, lather, rinse, repeat.
Does anyone have any suggestions for how to persuade people whose primary job isn't programming that it's of benefit to their community for them to be more open with the tools they've built?
Similarly, any advice on how to communicate the idea that version control, bug tracking, refactoring, automated tests, continuous integration and other common practices we professional developers take for granted are good ideas worth spending time on?
Unfortunately, a lot of scientists seem to hold the opinion that programming is a dull, make-work necessary evil and that their research is much more important, not realising that these days, software development is part of scientific research, and if the community as a whole were to raise the bar for development standards, everyone would benefit.
Have you ever been in a situation like this? What worked for you?
Software Carpentry sounds like a match for your request:
Overview
Many scientists and engineers spend much of their lives programming, but only a handful have ever been taught how to do this well. As a result, they spend their time wrestling with software, instead of doing research, but have no idea how reliable or efficient their programs are.
This course is an intensive introduction to basic software development practices for scientists and engineers that can reduce the time they spend programming by 20-25%. All of the material is open source: it may be used freely by anyone for educational or commercial purposes, and research groups in academia and industry are actively encouraged to adapt it to their needs.
Let me preface this by saying that I'm a bioinformatician, so I see the things you're talking about all the time. There's some truth to the fact that many of these people are biologists-turned-coders who just don't have the exposure to best practices.
That said, the core problem isn't that these people don't know about good practices, or don't care. The problem is that there is no incentive for them to spend more time learning software engineering, or to clean up their code and release it.
In an academic research setting, your reputation (and thus your future job prospects) depends almost entirely on the number and quality of publications that you've contributed to. Publications on methods or new algorithms are not given as much respect as those that report new biological findings. So after I do a quick analysis of a dataset, there's very little incentive for me to spend lots of time cleaning up my code and releasing it, when I could be moving on to the next dataset and making more biological discoveries.
I'll also note that the availability of funding for computational development is orders of magnitude less than that available for doing the biology. In a climate where only 10% of submitted grants are getting funded, scientists don't have the luxury of taking time to clean and release their code, when doing so doesn't help them keep their lab funded.
So, there's the problem in a nutshell. As a bioinformatician, I think it's perverse and often frustrating.
That said, there is hope for the future. With second- and third-generation sequencing, in particular, biology is moving into the realm of high-throughput discovery, where data mining and solid computational pipelines become integral to the success of the science. As that happens, you'll see more and more funding for computational projects, and more and more real software engineering happening.
It's not exactly simple, but demonstration by example would probably drive the point home most effectively: find a task the researcher needs done, find someone who did take the time to make a tool with source available, and point out how much time the researcher could save by having that tool available. Then point out that they could give back to the community in the same fashion.
In effect, what you are asking them to do is become professional developers (with their copious free time), in addition to their chosen profession. Their reluctance is understandable.
Does anyone have any suggestions for how to persuade people whose primary job isn't programming that it's of benefit to their community for them to be more open with the tools they've built?
Give up. Seriously, this is like teaching a pig to sing. (I can say this because I used to be a physicist so I know what they're like.)
The real issue is that your colleagues are rewarded for scientific output measured in publications, not software. It's hard enough in computer science to get recognized for building software; in the other sciences, it's nearly impossible.
You can't sell good development practice to your biology friends on the grounds that "it's good for you." They're going to ask "should I invest effort in learning about good software practice, or should I invest the same effort to publish another biology paper?" No contest.
Maybe framing it in terms of academic/intellectual responsibility would help, to a degree - sharing your source is, in many ways, like properly citing your sources or detailing your research methodology. There are similar arguments to be made for some of the "professional software developer" behaviors you'd like to encourage, though I think releasing the code is probably an easier sell on these grounds than other things which could require significantly more work.
Actually, asking any busy project team to include in their schedule time for making their software suitable for adoption by another team is extremely hard in my experience.
Doing extra work for the public good is a big ask.
I've seen a common pattern of "harvesting" after the project is complete, reflecting that immediate coding for reuse tends to get lost in the urgency of the day.
The only avenue I can think of is if the reuse is within an organisation with a budget for a "hunter gatherer", someone whose reason for being there is IT.
You may be onto more of a winner with things such as unit tests, because they have an immediate payback for the development.
For one thing, could we please stop teaching biologists Perl? Teaching non-professional programmers a write-only language is practically guaranteed to lead to unmaintainable, throw-away code. Python fills the same niche, is just as easy to learn (it's even used to teach kids programming!), and is much more readable.
Draw parallels with statistics. Stats is a crucial part of scientific research, and one where the only sensible advice is: either learn to do it properly, or get an expert to do it for you. Incorrectly-done stats can completely undermine a paper, just as badly-written code can completely undermine a public database or web resource.
PS: This blog is very good, but getting them to read it will be an uphill struggle: Programming for Scientists
Chris,
I agree with you to a degree, but in my experience what ends up happening is that in their eagerness to publish you end up with too many "me too" codes and methods, which don't really add to the quality of science. If there was a little more thought about open sourcing code and encouraging others to contribute (without necessarily getting publications out of it) then everyone would benefit.
Definitely agree that a separation between the scientific programmers and the software engineers is a good thing, especially for production applications. But even for scientific programming, the quality of my code would have been so much better if I had followed good practices at the time.
In my experience the best way of getting people to program cleanly is to show a good example when you're working with them.
eg: "I never spend hopeless days debugging my code because the first things I code are automated unit tests that will pinpoint problems when they are small and easily detectable"
or: "I'm very bad at keeping track of versions of things, but sometimes my new code does break what did work before. So I use svn/git/dropbox to keep track of things for me"
In my experience that kind of statement can raise the interest of "biologists that learned how to script".
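As a concrete (and entirely hypothetical) illustration of the first statement, even a tiny helper can get a test that pinpoints a breakage the moment it happens; the gc_content function below is invented for the example.

```python
# Hypothetical example: a tiny stdlib unittest for a small helper function.
# If a later edit breaks gc_content(), this test fails immediately and points
# at the exact function, instead of surfacing as a wrong number days later.
import unittest

def gc_content(sequence: str) -> float:
    """Fraction of G and C bases in a DNA sequence."""
    sequence = sequence.upper()
    if not sequence:
        return 0.0
    return sum(base in "GC" for base in sequence) / len(sequence)

class TestGcContent(unittest.TestCase):
    def test_half_gc(self):
        self.assertAlmostEqual(gc_content("ATGC"), 0.5)

    def test_empty_sequence(self):
        self.assertEqual(gc_content(""), 0.0)

if __name__ == "__main__":
    unittest.main()
```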
And if you need to collaborate on a bigger project, make it clear that you have more experience and that everything will go more smoothly if things are done your way.
Regarding publication of code, current practice is indeed frustrating. I would like to see a new journal like Source Code for Biology and Medicine, where code is peer-reviewed and can be published, but with no (or very low) publication costs. Putting code on SourceForge or elsewhere is indeed not "scientifically worth it" because it doesn't add a line to your publication list, and most code is not revolutionary enough to warrant paying $1,000 for publication in Source Code for Biology and Medicine or PLoS One...
You could have them use a content management system, like Joomla. That way they only push content and not code.
I wouldn't so much persuade as I would streamline the process. Document it clearly, make video tutorials and bundle some kind of tool chain that makes it ridiculously easy to get source repositories set up without requiring them to become experts in something that isn't their main field.
Take a really good programmer who already knows best practices and ask your scientists to teach him what they need and what they do; eventually the programmer will have the minimum domain knowledge (I suspect it takes between 1 and 3 years, depending on the domain) to do what the scientists ask for.
Developers always learn another domain of competency, because most of their programs are not for developers, so they need to know what the "client" does.
To be devil's advocate, is teaching scientists to be good software engineers the right thing to do? Software in research is usually very purpose specific - sometimes to the point where a piece of code needs to run successfully only once on a single data set. The results then feed into a publication and the goal is met. And there's a high risk that your technique or algorithm will be superseded by a better one in short order. So, there's a real risk that effort spent producing sparkling code will be wasted.
When you're frustrated by wading through a swamp of ill-formed Perl code, just think that the code you're looking at is one of the rare survivors. Mountains of such code have been written, used a few times, then discarded never to see the light of day again.
I guess I'm just saying there's a big place in research for smelly heinous one-off prototype code. There are good reasons why such code exists. It may not be pretty, but if it gets the job done, who cares? We can always hire a software engineer to write the production-ready version later, IF it turns out to be justified, and let our scientists move on.
The organization that I currently work for seems to be heading in the direction of dictating to software developers which tools, languages, frameworks, etc. must be used. However, nobody has convinced me that this is a good thing. The main argument I have heard is that it will make training easier. But, after developing software for over 10 years, I've never relied on training to learn how to use an IDE, programming language, or anything else; so I just can't relate.
With the rapid speed at which technology evolves, and the s-l-o-w-n-e-s-s at which I know the standards will adapt, I am concerned that my customers will have requirements that I won't be able to easily implement or won't be able to implement as efficiently as I should. For example, if there is a UI requirement for an auto-complete feature in a web app, and no API has been approved for this yet, I would need to implement auto-complete myself as opposed to using one of the many APIs that provide it out of the box.
A more radical example is if my customers wanted to have Google Wave features. In that case I would want the flexibility of configuring my development environment (including the IDE) and selecting appropriate frameworks (ex: GWT) to use.
Please provide feedback on whether or not you think that software developer tools, languages, etc should be standardized and a few points to support your argument.
There is a lot of benefit for standardization. My organization has fairly set standards on what technology we will use. We realize strong benefits in the following areas ...
Hiring. It is easy to describe what technologies we are looking for and make sure our recruiters are looking for the right people.
License/Software costs. I can buy enterprise licenses easily. It gives me the opportunity to keep costs down by letting me spend more with a smaller number of vendors and thus get more leverage.
Consistency of delivery. Our teams have a very good idea of what projects will take to build, rollout and maintain because they have done it with success before (and they know the pitfalls too).
Agility. I can have one team take over for another or one individual take over for another more easily because of standardization.
Quality. We have peer reviews across teams as well as QA across teams.
Without a consistent use of a technology stack, tools, languages and frameworks, these types of benefits would be more difficult to realize. I am not closed off to new technologies, but there has to be a concrete reason beyond "what if I want to ..."
A major issue with standardization is that once standards are out there, they get stamped in concrete and are difficult to change. This is why our corporate IT environment is stuck on IE 6, and the best change control system we have access to is CVS. Given this situation, some developers break the rules, and some find jobs at more innovative companies.
You have a mixed bag here.
I wouldn't standardize on IDEs, because every developer works differently. Those who are insanely proficient in emacs may see their performance suffer if forced to use Visual Studio. I optimize my Visual Studio experience with a 30" monitor and find it incredibly productive.
However, standardizing on some tools, such as SCons or make or something to build products is perfectly reasonable.
Banning some libraries and having a process where new libraries are either approved or not is also very reasonable. I know lots of companies that ban boost, or JQuery, or ban open source libraries in general, etc. And they had good reasons for doing it. I know I got fairly upset when an intern incorporated some random "security" library he found on the internet without running it by anyone.
In the end every company is different. You have to be standardized enough to avoid serious complications and issues as people come and go, or as new products are formed and organizational structures change. But you have to be flexible enough to avoid re-inventing every wheel you need.
The important thing is to have clear reasons for adopting a certain tool or banning some other tool or library. You can't just have management dictate that thou shalt use this and not that without consulting the engineering team and making the decision for good reasons. And once decisions are made those reasons should be written down and clearly communicated.
And also, if, in the end, your favorite tool or library isn't adopted, please don't whine about it. Be adaptable and do your job, or find a new one that makes you happier.
I once worked for a manager who felt the need to innovate at every level of his software development operation. Every development tool had to be cutting edge (preferably in beta). Many of the tools he asked us to use didn't have good documentation, and training was not available. Ultimately, most of the technology we tried simply didn't work. We wasted a lot of time churning through new technologies, only to dump them when it became clear we couldn't make progress.
I tried to make the case that innovation is perfect in the area where your value proposition lies. Innovation can also be used judiciously where standard techniques fail. But for most mundane tasks, using tried-and-true tools and methods should be the default. Less risk, less cost, less management attention needed. So you can focus time and energy on the areas where innovation has the most benefit.
So I think standardization has an important role. But blindly saying everything must be standard is just as sure to fail as my manager who thought everything must be innovative.
The number one argument in favor of standardization is that it maximizes the ability of the organization as a whole to use a common body of knowledge. Don't know how custom web controls are built in ASP.NET/C#? Ask Bill down the hall who has the knowledge. If you use different tools, such organizational wisdom is cut off at the knees. While it is not good to be restricted to a least common denominator (and hopefully your management will realize this) you should not overlook the benefits of shared experience!
UPDATE: I do not agree that innovation and standardization are polar opposites. Indeed, would we have nearly the level of web innovation if we still had the mishmash of networking standards characteristic of the 1980s? No we would not. Of course, we might have more innovation on new low-level networking protocols but is that really worth it? In its place, we've had an explosion of creativity within the bounds of TCP/IP and the Web standards (http, html, etc.)
The trick is knowing how to standardize without using it as an argument for closing down all new exploration. For example, we use only ASP.NET/C#/SQL Server in my company but I'm perfectly open to the use of new tools within this framework (we recently adopted the DevExpress reporting package, for example, supplanting the earlier standard).
Standardization is a must for a productive development team. However, that doesn't mean you can't revisit the standards from time to time to adjust them to new technologies and trends.
Whether you develop operations software for internal clients, or products for external clients, there is no compelling reason not to standardize. You certainly did not give one.
Had you seen how companies are struggling with holding heterogenous products together that have been maintained for 10 years or more, and are now a conglomerate of various technologies that developers at some point thought made sense, you would not have asked this question.
From the top of my head, I could name at least 2 well-known software companies that will be driven out of business because their cost of maintenance has become so high that they can no longer compete (but I won't).
I think the misconception here is that suppressing individualism would suppress innovation. That is simply not true. It is poor technical leadership that suppresses innovation.
One unpleasant consequence of standardization is that it tends to stifle innovation.
Innovation is scary. It involves cost and risk.
Standardization is not scary. It reduces cost and risk in the short term. Until your competitors have created a game-changing innovation. Then standardization is very costly.
It depends on the organization I think. One like Microsoft, yes, there should be a bit of a standard. A small business with one IT department, no. A larger business with several offices around the world ... maybe.
it all depends :-P
Assuming the organization has a broad suite of enterprise applications to manage, I'd say no for the following reasons, though I may be taking the message of everything being the same a bit too literally:
Compromise on using best-of-breed systems, e.g. if all the databases are to be MS SQL Server then any Oracle DB solution is thrown out. This would also apply to IDEs: everyone has to use the same one whether they are doing data warehouse report development, web applications, console applications or WinForms. I'm thinking of systems like ERP, CRM, SCM, CMS, SSO and various other TLAs, FLAs, and SLAs. (LA = letter acronyms, for a decoding hint if you need it.)
Upgrading by committee is another interesting issue. If each team can choose its own tools, one person can decide when to upgrade, e.g. to start using Visual Studio 2008 instead of Visual Studio 2005. With a standard, you have to determine at what threshold it is worth upgrading everyone simultaneously, which may be a big headache if there are more than a few developers. For example, over the past 10 years, when would the IDE changes, framework changes, etc., have happened?
Exceptions to the standards. Could a contractor bring in something not used in the organization if they believe it helps them build better software, e.g. Resharper or other add-ons that some contractors believe are very worthwhile that the organization doesn't want to spend the money to get? What about legacy systems that may make the standard become a bit unwieldy, e.g. this was built in ASP.Net 1.1 and so everyone has to have VS 2003 installed even if most will never use it?
Just my thoughts on this.
There are several good reasons to standardize.
First, it allows the enterprise better organizational flexibility, if everybody is more or less familiar with the same things. It also allows people to help each other better. I can't help with problems in the ASP.NET stuff, and there's not all that many people who can help me on the C++ side.
Second, it reduces support problems and expenses. Oracle and SQL Server are both decent products, but using both for similar functions is only going to cause problems. Not to mention that I've been in shops using several widely different platforms to do similar things, and it wasn't fun.
Third, there are some things that just have to be standardized. We couldn't operate with half the team on VS 2005 and half on VS 2008, since we keep project files under source control. We had to pick a time and convert over.
Fourth, in some businesses, it simplifies the regulatory problems. I don't know what business you're in. I work at a place where we can get away with making mistakes right now, but I've also contracted at a bank and a utility, where it's necessary to be able to show auditors that everything is going in a standard way.
Fifth, it can simplify procurement, if you're dealing with software that costs money.
This doesn't particularly limit us, since if there's something we need that isn't standardized on we just go ahead and get it or do it.
If you want to make a business case against standardization, you'll need to have a business-related argument. Your argument seems to be that you won't be able to implement features the user wants, and that is a consideration. Got another argument?
There's nothing wrong with standardizing on an IDE that is rich enough to be configured for individual developers.
However, do make sure that you don't prevent individual developers from using additional tools, as long as the tools are licensed and the use of a tool by one developer doesn't require all other developers to use it.
For instance, I happen to use NORMA to help me design databases. The output is SQL Server DDL (or anything else I want). I can make the DDL part of the project without making my NORMA source part of it. Later developers do not need to use NORMA to work on the project.
On the other hand, if I decided to use the Configuration Section Designer to create configuration sections, then future developers would also have to use it. A decision would need to be made about whether to use that tool.
The company I work for uses C#, ASP.NET, JavaScript and generates HTML. The advantages over and above those mentioned above are that there is a perception of improved velocity for maintenance and adaptive changes. The disadvantages include generating some boredom for people who are technically savvy (geeky) and prefer to use a mix and match of languages, depending on what they fancy is better suited, or for 'performance reasons'.
Technical and personal supervision is always good to have when you are developing as fast as you can to meet tight deadlines and competing in a highly saturated market for web development.
I'm a new software architect/lead, coming up with the software design for a team of software developers. I'm producing the requirements spec, interface header files, Visio software design docs, build plan, etc.
My question is: what does the rest of the team do during this period? I'm certainly engaging them in the design, but we don't need the whole team actively working on what I'm doing all the time.
Are there any good books for a new software architect?
Generally the various stages overlap, so there will be some coding during design, etc. There are a lot of things to do besides that: reviewing unfamiliar technology that is going to be used, setting up the source control system, reviewing business requirements, and reviewing your documents to make sure they make sense and are clear. There is a lot of other work to be done besides programming.
What a software team does while the lead does the design varies a lot from company to company. At my company we try to work on the design while the developers are finalizing other projects or fixing bugs.
Another approach that I've taken when starting a whole new project is to get the developers to work on the design as well - people with a good understanding of the requirements can help you design smaller parts of the system and write the specs for them. Others can work on mockups and frameworks. This worked rather well for the small software team I led in a previous job (4 developers in total).
I also found it useful to have other team members research parts I'm unsure of (or even validating that things I think should work will indeed work), such as:
Investigating whether an external API provides the features we need
Writing a small proof of concept or technology demonstrator
Creating an API mockup (header file, interface or REST endpoint) to investigate whether the API looks useful (a minimal sketch follows below).
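For that last item, here is a minimal, hypothetical sketch in Python (the CustomerDirectory interface is invented for illustration): defining the interface the design assumes, plus a throwaway in-memory fake, lets the team judge whether the API is convenient to call before anyone commits to the real implementation.

```python
# Hypothetical API mockup: declare the interface the design assumes, plus a fake
# in-memory implementation, so callers can be written and reviewed before the
# real backend (or external API) exists.
from abc import ABC, abstractmethod
from typing import Optional

class CustomerDirectory(ABC):
    """Interface the design assumes; the real version might wrap an external API."""

    @abstractmethod
    def find_by_email(self, email: str) -> Optional[dict]:
        ...

class InMemoryCustomerDirectory(CustomerDirectory):
    """Throwaway mock used only to explore whether the interface feels right."""

    def __init__(self) -> None:
        self._customers = {"ada@example.com": {"name": "Ada", "tier": "gold"}}

    def find_by_email(self, email: str) -> Optional[dict]:
        return self._customers.get(email)

# Usage during the design spike:
directory: CustomerDirectory = InMemoryCustomerDirectory()
print(directory.find_by_email("ada@example.com"))   # {'name': 'Ada', 'tier': 'gold'}
```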
As others have said, you typically want a ramp-up period during the first part of the project, and through the first iteration. You're planning on building this iteratively, aren't you? Start with a core team (no more than 3-4 people, since you're going to need to communicate heavily with each other) to help you explore the requirements, get a basic data model in place, identify and set up any frameworks, and identify and set up build and test tools. Some coding activities typically take place in the design phase: UI mockups, and run-ahead prototypes of technically sensitive areas (whatever risks you have should be mitigated by explorative coding, be they new technologies, undocumented interfaces to integrated systems, or unstable requirements).
But coders in the design phase should help with the design, in order to get their buy-in, and to help train up the rest of the team during the first iterations. Your role during this is to ensure that the major nonfunctional requirements are known, prioritized, met by the design, and testable. You should also collaborate with the project lead or whoever else is responsible for staffing and financing in order to sketch out the iterations and the staffing levels needed. Ensure the solution can be built iteratively, and aim at implementing only a basic structure during the first iteration, both to build confidence and to eliminate risks. (Sometimes, you can push major risks to the second iteration, and focus the first towards confidence and team building.)
And of course, be sure you are not designing every detail. You should be able to use every design artifact in the next iteration (and elaborate them later as needed). Since design decisions are expensive to change, try to postpone them. However, some influence the entire solution (for instance, the data model, or your approach to security) and absolutely must be at least outlined up front. This isn't waterfall. This is just not closing your eyes and hoping a viable architecture will emerge by magic.
But design proceeds throughout the iterations. It's just that you do less of it as you go along, and with lesser impact on the solution (unless you're unlucky... and then things get expensive).
Stop doing the useless things you do and just start coding with them! ;)
If there is no overlap with another ongoing project, getting them involved as you're doing is great, maybe push it a little further by having them prototype and present the plus and minus of alternative technologies (APIs, frameworks, libraries, etc...) that your project could use.
As a new software architect, I can recommend some books that helped me understand the role of the architect (but of course not to master it):
Fundamentals of Software Architecture: An Engineering Approach:
This book gives a good modern overview of software architecture and its many aspects; a good place to start if you are a beginner or want to broaden your knowledge.
Software Architecture in Practice:
Explains what software architecture is, why it's important, and how to design, instantiate, analyze, evolve, and manage it in disciplined and effective ways.
Software Architect's Handbook:
This book takes you through all the important concepts, right from design principles to different considerations at various stages of your career in software architecture. It begins by covering the fundamentals, benefits, and purpose of software architecture.
Clean Architecture: A Craftsman's Guide to Software Structure and Design:
Learn what software architects need to achieve and how to achieve it, master essential software design principles and see how designs and architectures go wrong.
Software Architecture: The Hard Parts:
An advanced architecture book; with it, you'll learn how to think critically about the trade-offs involved with distributed architectures.
Usually there's another project they can work on, but...
I have my team review the project specs/requirements and put together a basic/preliminary structure to get them already thinking through the application and working out specific questions.
When we convene at the table to discuss the plan they already have an idea of what the project is and requires and in some cases, they present questions I may have missed or overlooked.
Although it's too late now, a good way to approach it is to move the architect over before his current project has ended. Start freeing him up at like 25% then work your way up to 75-100% on the new project a month or two before it starts (maybe more depending on how much analysis and customer interaction there is).
On a trivial project (let's say 2 man-years) it might not be necessary, but anything bigger than that can end up in chaos if somebody doesn't at least get the analysis right before everybody jumps aboard.
If your team does not have any other projects to work on, ask the experienced programmers on your team to come up with a prototype so that you can create a requirements doc according to the needs of the client.
Also, programmers who are new to the technologies being used by the team could use this time to familiarize themselves with the technologies on which your team is going to develop the project.
architect != designer
Chances are that all of your developers can help with the design; let them. Architects don't have to be "lone wolves" and do everything themselves. You lay out the guidelines and the principles and the scaffolding, rough in the wiring, and let your developers flesh out the details - whether it is drawing Visio diagrams or building prototypes to mitigate unknowns/risks.
Migrate towards Agile/XP and away from waterfall methods, and you'll find the team a lot more help.
When making the general design, it's very handy to have programmers create proof-of-concepts. Do that especially with parts of the system that could end up being show stoppers if they don't work in the way you plan to do them, so you can think of alternatives, and adjust the design.
That's going to help you to make the right design-decisions before moving entirely into a certain direction.
Just doing a design, and then moving on and start coding is a sure way to mess up a project. You won't realize that your design is not feasible (or just plain sucks) until you're half-way coding, and by then it's too late to make radical changes.
You'll waste time mitigating non-existing problems during the design, and you'll run into unforeseen problems during implementation.