Software Requirement Specifications for Web Applications [closed]

I'm looking for some guidance/books to read when it comes to creating a software requirement specification for a web application. For inspiration I have read some spec documents for desktop-based applications. The documents I have read capture a system's functional requirements in use cases, which tend to be rather data-oriented, centered around the various CRUD operations the application is intended to perform.
I like this structure, but I'm finding it difficult to marry it to what my web application needs to do, which is mostly reading data as opposed to manipulating it. I've had a go at writing some use cases, but they all tend to boil down to "Search for item", "Change view of search results", or "User selects facet to refine search results". This doesn't sound quite right to me and makes me wonder if I'm going about this the right way.
Are there planning differences between web based and desktop based applications?

In my experience, there is really nothing wrong with having all the specifications be CRUD. Most of the time, an application isn't just "a simple CRUD app": requirements evolve, and different parts of the system tend to diverge and acquire specific logic.
Even if it feels like repeating the same CRUD sentences over and over, actually writing them down and thinking about them (instead of copying and pasting) will often uncover hidden requirements.

The differences between desktop-based applications and web-based applications are staggering.
I recommend reading these in exactly this order, and applying the knowledge in exactly the opposite order, aside from CSS 3, HTML 5, and XHTML 1.1:
RFC 3986 - URI
RFC 2616 - HTTP 1.1
RFC 4346 - TLS 1.1
RFC 4251 - SSH Protocol
RFC 4252 - SSH Authentication
RFC 4253 - SSH Transport
RFC 2045 - MIME
RFC 4627 - JSON
HTML 4.01
XML
XHTML 1.0
XHTML 1.1
ECMAScript
CSS 2
HTML 5 (Not a standard)
CSS 3 (Not a standard)
Web Content Accessibility Guidelines 2.0
Symantec Internet Security Threat Report Volume XIV
Symantec Internet Security Threat Report Volume XV
OWASP Top 10
SEO
Once you have finished reading, you should begin to understand how the basic technology of the web works. Only at this point would you be ready to develop, conformantly, for a web application. There are many other technologies at play, but these are the basics, and once you are familiar with them you will know where to look for more information.

Basically you can use the same method as for desktop applications, although you might make some additions, because web applications often have different types of requirements. First of all, read something good about use cases: there are different use case levels, and that might be the fix for your use cases that do not seem quite right. Also, do not forget about use case generalization and parameterized use cases if CRUD repetition is the problem.
One thing that is often more important in web applications than in desktop apps is usability. This is because of the nature of the web: people often have the choice of not using your service and going to the next Google result if your app is not usable. So a good addition to the spec is personas: find some plausible instances of the human actors for your use cases, think of goals they might often want to achieve using your web app, and describe how they will achieve them (and try to make it super easy, of course). Another important thing is the information architecture: the way in which you will present information in your web app. This comprises navigation and some basic layout, though not necessarily visual design, just information about where to find something in your web app. It can be sketched using rapid prototyping tools.

Related

Choosing a framework for both app and website [closed]

There is an old PHP-based website which provides a single service and has been working for many years. Now we have decided to modernize the website itself to make it faster, while at the same time providing an app for it, and we are going to hire some people for that. As I'm a developer myself (a C++/Go fan, mostly in the machine learning area), I want to know the best options to choose when such a thing is needed. I personally think that the whole back-end should work as a RESTful API, with something like React and React Native in front for both web and mobile, but I'm not sure if this is the best decision we can make.
I want to know what other people with the same requirements have done. Some people recommended Meteor, but it seems to be changing very fast, and we really need a stable and mature solution without too much maintenance.
Looking forward to hearing your suggestions
I'm a big fan of the REST API + front-end JS framework architecture.
One option would be to build an API with Ruby on Rails. The "rails new" command includes an --api option for generating a Rails application that lacks views and serves JSON. You can learn more about building APIs with Rails on the rails-api gem GitHub page. (Keep in mind the rails-api gem has since been rolled into Rails proper.)
Overall, Rails is a fast way to get a services layer up and running. It's fairly simple, and might be a good option for your app, which as you said, provides just one service. However, Rails is also powerful enough to support a much more substantial API.
If you want a REALLY simple Ruby-based services layer, you should check out Sinatra. You could also go full JavaScript with Express, which is about as simple as Sinatra.
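To give a sense of how small that can be, here is a minimal sketch of an Express JSON service (the route and payload are made up for illustration):

```javascript
// A minimal JSON service in Express; endpoint and data are illustrative.
const express = require('express');
const app = express();

app.get('/api/items', (req, res) => {
  // In a real service this would come from a database.
  res.json([{ id: 1, name: 'example item' }]);
});

app.listen(3000, () => console.log('API listening on port 3000'));
```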
If you have a background with C++ and Go, you may not want to jump headfirst into these web-heavy technologies. Consider using Java Spring for your services layer. (I would link but I don't have enough rep. Haha)
As far as the front end goes, you're on the right track with React and Meteor. I'm personally a fan of Angular (specifically Angular 2). It's a really popular JS web app framework, great for asynchronous JavaScript and single-page applications. Granted, Angular has a steep learning curve to start, but if you're willing to climb, it pays off.
Let me know if you have any specific questions! Good luck.
Meteor is a bit heavy if all you want is a web site. It depends how much interactivity you want: if it's more of an app, then it's worthwhile, and if you want to do iOS and Android apps, then Meteor is pretty fast to get up and running.
I would recommend choosing a technology that either you are keen to learn or you already have some skills in. Learning curves mean longer projects.
If you need a native mobile application and an interactive site, then React and React Native with a pure API server is a really good solution. I have developed several similar services and am very satisfied with this combination.
First, you can share the API access layer and part of the business logic between both clients, as sketched below.
Second, you are using identical technologies on mobile and web, yet the mobile application has a real native UI, because React Native doesn't use a browser.
Third, the server becomes simpler, more maintainable, and more scalable.
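As a rough sketch of that first point, the shared API-access layer can be a plain module that both clients import, since React and React Native both provide fetch (the base URL and endpoint here are hypothetical):

```javascript
// api.js -- a hypothetical API-access module shared by web and mobile clients.
const BASE_URL = 'https://api.example.com'; // placeholder

export async function getItem(id) {
  const res = await fetch(`${BASE_URL}/items/${id}`);
  if (!res.ok) throw new Error(`Request failed: ${res.status}`);
  return res.json();
}
```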
Personally I prefer WebSockets as the network protocol (look at the amazing Go gorilla WebSocket implementation). But if you need to support old browsers or prefer a more classic REST API, that's fine too.
Answering based on my comment above: this is all opinionated, but I find that Node/JavaScript tends to be about the best bang for the buck if all you are trying to do is get through the proof-of-concept stage, and possibly scale from there. Depending on your back-end needs, you may want to migrate parts, or the whole, to Go or Python.
On the front end, using web tooling (mostly centered around Node and npm), I would build the primary UI with React and the material-ui components, targeting a web/browser interface. You can then extend this to support more native-like features: build "native" versions using Cordova for Android and iOS, and use Electron as a base for desktop versions.
For the backend, node is a nice place to start, and I feel RethinkDB is a great database for many use cases (despite the changes from commercial to open platform happening now). Mongo is another option, and there are hosted/managed options through a few providers. Alternatively, I'd probably lean towards PostgreSQL via Amazon.
For the API server, I'd definitely do a first pass with Node and then, as needed, rewrite it in whole or in part with Go or Python. Go will have better performance and scale without compromising some of the ease-of-development aspects. Python has very rich tooling if you need more flexibility than performance in some areas. There are other options: C# is becoming a very good cross-platform option, which I happen to really appreciate as a language. The performance there is very good, though you may find some constraints less appealing than JS or Python.
YMMV with any of these choices.

What is the use of REST in a distributed web application [closed]

I am learning about REST APIs and I am unable to understand how REST is used in a distributed web application.
This is the only reference I have seen, but I am still unable to learn about REST for distributed computing.
Thank you in advance.
It isn't really clear to me what you are asking, but generally, REST is just another way to do RMI (Remote Method Invocation) or RPC (Remote Procedure Call). However, while RMI works only within Java, REST uses the HTTP protocol for communication. Since HTTP is implemented in most of the technologies/libraries/languages we use today, it is an easy way to connect them.
Originally, SOAP (Simple Object Access Protocol) was used to implement inter-server and client-server communication. SOAP has many additional features on top of HTTP; WSDL (Web Services Description Language) allows for automatic proxy generation, for instance. While those features make SOAP rich and a perfect fit for enterprise applications (CERN implemented their own version of SOAP which allowed stateful communication), it was also too bulky for many quickly changing, smaller companies.
REST uses features from HTTP but can otherwise vary in many ways. The URLs and the objects can be defined freely, as can the format in which objects are serialised (mostly JSON). This, paired with dynamic languages like Python, Ruby, or JavaScript (in the client, or Node.js), makes it very easy to set up communication between different services, as well as between client and service (SPAs, Single Page Applications, for the latter).
However, I think the very interesting fact is that people discovered you have to pay a high price for this flexibility:
JSON-represented objects, although smaller than XML, are still larger than binary encodings (this can be solved to a large extent by gzip, but that creates a second problem);
serialising and deserialising objects to/from strings is very inefficient and much slower than working with binary representations (and of course zipping the strings to reduce size also costs CPU).
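To make that second point concrete, here is the round trip that every JSON-over-HTTP call implies, as a trivial sketch:

```javascript
// The serialisation round trip behind every REST call (illustrative only).
const payload = { userId: 42, items: ['a', 'b', 'c'] };

const wire = JSON.stringify(payload); // object -> text, sent over HTTP
const decoded = JSON.parse(wire);     // text -> object, on the receiving side

// The text form is verbose compared to a binary encoding, and both
// conversions burn CPU on their respective ends.
console.log(wire.length, decoded.userId);
```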
Until now the choice has been HTTP, which is inefficient, or RMI, which is inflexible and usable from only a few languages. This is why there are two projects that aim to solve this:
Protocol Buffers from Google https://code.google.com/p/protobuf/
Apache Thrift http://thrift.apache.org/
Both of these projects let you use a compact binary format for your messages. And because both have implementations in many languages (Apache Thrift many more than Google's Protocol Buffers), you can use this format to communicate between different servers.
Also, direct end-to-end communication is not always what you want, which is why there are various message queues that can fulfil many tasks in addition to message forwarding (for instance publish-subscribe, or round-robin delivery to a set of services). Probably the most widely used one is ZeroMQ.
Conclusion
You can use REST to communicate between the different services in your distributed web application, and it is often used this way, due to the simplicity of implementing such a communication channel between many different hosts and technologies. However, the overhead of serialising/deserialising can cost you a lot of CPU time, especially if you have a large back-end infrastructure with many services. In that case you should rather choose one of the binary formats (Apache Thrift, Protocol Buffers) to ensure efficiency.

Enterprise Web Application Architecture Question

I am wondering if it would be possible to develop an enterprise-level web application without the use of a standard MVC structure and application server, by carrying the business/flow logic and session data to the client-side JavaScript and making it talk to REST data services directly. Maybe we could make use of an authorization/authentication layer and a second validation layer sitting on top of the data services. All these services operate on standard HTTP methods, support configurable logging and monitoring, and content or query parameters are all contained in the HTTP request/response body. Static HTML and JavaScript are served to the browser, and the rest is carried out by JavaScript functions talking to the HTTP-based authorization/authentication, validation, and data services. Do you think this kind of architecture could satisfy enterprise-level web application requirements?
It's possible but unlikely; what are the drivers that suggest this architecture to you? Is it just to be different or are there some specific aspects that this best addresses?
"by carrying the business/flow logic and session data to the client-side JavaScript and making it talk to REST data services directly"
In theory you'd still be able to have an appropriately layered solution (Business Logic (BL) script vs UI-focused script), but practically speaking it'd be messy, and you'd lose the ability to physically separate it into different tiers. This could "bite" you at any number of places in the life of the system.
"Enterprise" grade systems are seldom small; I hate to think how much logic you'd have to send over the wire to support a given action/process.
Putting all the BL into a scripting language ties you to that platform, and platforms change over time. The bad thing about scripts is that, whilst they are stable to a degree, I'd suggest they are more exposed to change than server-based platforms like Java or .Net. In an enterprise scenario, the servers will have very tight change control and upgrade paths mapped out for them, whereas browsers are much more open to regular change.
There's the issue of compatibility: unless you're tied to a specific browser (down to the version level), guaranteeing consistent behavior is going to be harder and will likely require more development effort. Let's say you deliver the solution successfully; what do you do when the business wants to take advantage of mobile computing, say iPads? Your only option is going to be a browser; you won't be able to take advantage of any of the native capabilities of the platform. "The web and browsers" might seem like they'll be around forever, but then I'm guessing that's what the mainframe folks said in their time. A server-centered solution is going to give you more life for less expense.
Staffing will be an issue - you'll need very strong JavaScript and server-side developers.
Security: having your core BL out on the client where it's much more exposed sounds very dangerous.
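To illustrate that last point with a made-up example: a rule enforced only in browser JavaScript is advisory at best, since anyone can call the HTTP services directly and skip it, so the server has to enforce it again anyway.

```javascript
// Hypothetical client-side rule: the browser uses it to hide a button,
// but nothing stops an attacker from POSTing to the service directly.
function canApproveOrder(user) {
  return user.role === 'manager';
}

console.log(canApproveOrder({ role: 'intern' }));  // false, button hidden
console.log(canApproveOrder({ role: 'manager' })); // true, button shown

// The data service must still re-check the role server-side, e.g. rejecting
//   POST /api/orders/123/approve
// for any session that does not belong to a manager.
```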
EDIT:
Web apps can be slow for many reasons, not many of which are reason enough to put all your BL in JavaScript on the client. Building apps for performance is a whole field of endeavor on its own; I suggest you get more familiar with architecting and implementing for performance before you write off n-tier web apps altogether :)
Regarding keeping your layers separated: there are different ways of doing this, but it boils down to abstraction, and more correctly to keeping good design principles in mind; if you haven't heard of SOLID, that would be a good place to start. In terms of implementation, start reading up on Dependency Inversion (FYI, self-promotion: the article's mine and is .Net-focused, but you should have no problem tracking down Java-based ones too).

How do you collaboratively write specs? [closed]

I am working with a small team (2 others) of geographically dispersed developers, and I'm looking for good ways for us to collaborate on specs. We're thinking we might write the spec in Google Docs so we all have access to modify it in a central location.
What have you done? What good ideas do you have?
If you have an intranet or VPN, I would actually consider installing and using a small Wiki for these specs.
Compared to Google docs you get:
Much better versioning and change tracking (IMHO)
Much easier to start new documents for subsections
Actual markup rather than WYSIWYG (a matter of taste; I prefer LaTeX to Word)
Possible to attach a variety of other file types
Very easy to backup
Very easy to create an offline version
You don't have to worry about storing sensitive materials elsewhere.
The disadvantage is that it is not WYSIWYG, which may or may not be an issue for you.
Of course, you can pick a Wiki implementation that supports a better editor, and possibly even a synchronous collaboration one.
Google Wave - exactly what it's meant for - collaboration
IMHO, a word processor is the wrong tool for a programmer. A spec should be written in a plain text editor, using lightweight markup such as reStructuredText, AsciiDoc, etc.
The benefits of such an approach are:
There are excellent tools to manage plain text that are already in the hands of programmers (VCS, automated build systems, diff, patch, programming editors, grep, etc.)
A markup language allows for expressing intent rather than formatting.
With that in mind, a wiki seems to be the obvious choice.
Personally my tool chain of choice is:
reStructuredText as the markup language.
Trac as a Wiki
Firefox + the "It's All Text!" extension
Emacs + rst-mode
The choice of technology is one issue, and Google Docs is a good choice IMHO. But the real challenge is how to manage the process, e.g. how to divide the tasks.
My suggestion is to first make sure that the platform and all related technologies are decided upon as best as feasible. Then compose a thorough table of contents. A well-designed TOC will allow you to divide tasks properly and not "step" on each other's work. From then on, you each flesh out your assigned sections and review each other's work.
In effect, each TOC subsection becomes an atomic unit of work that can be assigned to and maintained by an individual, who is also accountable for said section(s).
Good luck!
I think it depends on:
How heavily into writing the specs you all are
If you're likely writing at the same time
Whether you intend to publish the specs.
Google Docs is nice and easy to get started with. It's also great that you can now export folders all at once. Still, for something that's going to be published to the web, a wiki or general CMS is a better presentation vehicle. A wiki will also integrate with your existing site.
If you've got small specs, primarily written by one person then use whatever tool is available where you're hosting the project code or website. If you're not likely to be editing at the same time then a wiki is good.
I've done the wiki thing, the passed-document thing, and the Google Docs thing.
The wiki thing has a low starting effort and lasts a pretty long time. At a certain size it does get to be a pain.
The passed-document thing (write, email, edit, email, etc.) only works while one person is starting everything up. As soon as there are even minor edits, it sucks.
The Google Docs thing is fine until you have several docs and several editors, or want to publish it online.
hth
This isn't programming related, but I've personally used Google Docs to write shared documents and found it easy to use.
I would suggest enabling Google Gears, however, in case the Google servers go down momentarily or an internet connection isn't available.
For writing specs collaboratively, you could try Gingko.
It's a card-tree editor, which means it's a mix between index cards and an outliner, with real-time collaboration and full Markdown support (as well as basic LaTeX).
We are still missing several features (version history, comments, etc), but for some the benefits of having everything in a tree structure outweigh these drawbacks.
Writing specs with it is great, because you can create a card for each user story, and drill into it as much as you like (and organize them into categories if you'd like).
http://gingkoapp.com

Developing an asset/node based CMS [closed]

I'd like to develop a CMS for fun/personal use based on an asset-based architecture rather than a page-based one (the "why" is the purpose of this question), but I can't find much information on the subject. All I've found barely scrapes the surface (there's a good chance I'm searching with the wrong terms).
"An asset-based CMS stores information as blocks of text called assets. These individual assets are then related to each other to automatically build pages."
What are the (dis)advantages of such a system?
What are the primary principles of asset-based architecture?
What should and shouldn't be an 'asset'? Where can I read more?
Decided to try to answer this after leaving my comment :)
If your definition of "asset" is along the lines of a "node" (such as in Drupal), or a document (such as the JSON-style documents in MongoDB or CouchDB), then here is some info:
I'll use the term "node" for this post. I think it's closest to "asset" and more popularly used. This also might be a very abstract answer, but hopefully it will at least get you thinking and pointed in the right direction.
Node-based architecture could be described as a cross between neural networking patterns and object-oriented programming. The key is that "nodes" are points of data, and nodes can be connected to each other in some way.
Some architectures will treat nodes much like object-oriented classes, where you have different classes of nodes that can inherit various characteristics of parent nodes - every type of node inherits the basic properties of its parent - an "Essay" node might inherit the properties of a "Text-Document" node, which in turn inherits the properties of the base node. Drupal implements this inheritance model well, although it does not emphasize the connections between nodes in the way that something like Facebook's GraphAPI/Open Graph Protocol does.
This pattern of node-based architecture can be implemented at any level too, and exists in nature - think of social circles within society or ecosystems ;) On a software engineering level, it can take the form of a database, such as how MongoDB simply has nodes of data (which are called documents in that case). These documents can reference other documents, although, like Drupal, Mongo does not emphasize connectedness. Ironically, relational databases like MySQL that are the opposite of document-based databases actually emphasize connectedness more, but that's a discussion for another day. Facebook's GraphAPI that I mentioned above is implemented on a Web-API level. The Open Graph Protocol shapes it. And again, something like Drupal is implemented at the front-end level (although its back-end implements the node pattern on a lower level, of course).
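As a tiny, hypothetical sketch of the document flavor of this pattern: each node is a bag of data, and a "connection" is nothing more than a stored reference to another node's id that the application resolves when needed.

```javascript
// Made-up document-style nodes; connections are just stored ids.
const nodes = {
  'node-0': { type: 'base', title: 'Root' },
  'node-1': { type: 'text-document', title: 'Essay', body: '...', parent: 'node-0' },
};

// Following a connection means looking up the referenced node.
function parentOf(id) {
  const node = nodes[id];
  return node && node.parent ? nodes[node.parent] : null;
}

console.log(parentOf('node-1').title); // "Root"
```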
Lastly, node-based architecture is much more flexible than traditional document/page-based CMS architecture, but that also means there is a lot more programming and configuring to be done by the developers. A node-based system will end up being far more interconnected, and its components will be integrated with one another at a deeper level, but it can also be more susceptible to breaking because of this deep level of connection: it is less cleanly separated into individual modules. Personally, I see a huge trend where people are moving to become more "node-based" and less "content-based" as people begin to interact with websites more like applications than like the electronic magazines of the 90's. Plus, the node pattern fits well with the increasing emphasis on user contribution and social browsing, because adding people and their accounts/profiles to a web site dramatically increases the complexity.
I know you said "asset," so I'll also say that asset emphasizes the data side of the node pattern more, whereas "node" emphasizes the connections between the pieces of data more.
But for further reading, I'd recommend reading up on the architecture of the software I mentioned. You could also check out Node.js, JSON, document-based databases, and graph APIs, as they seem to fit well with this idea of asset/node-based architecture. I'm sure Wikipedia has some good material on these patterns as well.
You could put this together very quickly using the CakePHP framework. It uses an MVC pattern and provides reusable view snippets called elements that may be inserted into layouts and can load whatever content you want based on the page, user, moon phase, etc. A layout built from elements might look roughly like this (a sketch; the element names are placeholders):
<?php // app/View/Layouts/default.ctp (CakePHP 2.x layout, sketched) ?>
<?php echo $this->element('method_x'); ?>
<?php echo $this->element('method_y'); ?>
<?php echo $this->fetch('content'); // default content from the controller action (view/edit/add/custom) ?>
<?php echo $this->element('method_z'); ?>
I think you might be describing a CMS backed up by a content repository.
The repository itself is implemented by Apache Jackrabbit, based on JSR 170:
The API should be a standard, implementation independent, way to access content bi-directionally on a granular level within a content repository. A Content Repository is a high-level information management system that is a superset of traditional data repositories. A content repository implements "content services" such as: author based versioning, full textual searching, fine grained access control, content categorization and content event monitoring. It is these "content services" that differentiate a Content Repository from a Data Repository.
For a CMS working on top of a content repository, look at Nuxeo.