SmartGWT, ZK and GenericFrame - Online Homework - bandwidth

Good day,
Our school, a small high school in semi-rural New Zealand, is currently looking into online homework solutions. Being one of the IT guys, I have been asked to look into some of the options. We have checked around and there are no robust solutions that cover what we are looking for. So, we are considering development of our own system, either on our own or in collaboration with some other schools.
Before I put significant time into any one option, I thought I should ask for some expert advice.
Please keep in mind that one of our major obstacles is that around 20% of our students are on dial-up because broadband is not available in their area.
We are also not limited to the technologies listed; they are just the ones we have been looking into up to this point.
With that in mind, here goes.
1. Is there a way to pre-determine the bandwidth needed for these technologies?
2. If bandwidth continued to be too limiting, could the final solution stand alone so we could distribute it to students on CD or USB stick?
3. What are some pros/cons of each for use with databases, specifically mysql or postgresql? (After all we do need to keep track of lots of data)
4. What are some pros/cons of each of these for RIA development?
I appreciate everyone for sharing their time and expertise on the matter.
Cheers,
Ben

1) If you write a full-AJAX application, such as one in GWT, the bandwidth will be:
a) the size of the application JavaScript, images, etc.; you may assume that everything is loaded when the user logs in (the browser cache may seem big, but it is easily evicted)
b) the size of the communication - in GWT this depends only on you: there is no magic full-page reloading, and only what YOU choose to send goes over the wire (see the sketch after this answer)
2) I don't quite follow your point: stand-alone applications can be distributed that way, but applications that use databases generally can't
3) PostgreSQL has high compatibility with Oracle - the same transaction + SELECT FOR UPDATE behaviour, and PL/pgSQL is heavily inspired by PL/SQL (easy to rewrite stored procedures).
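To make point 1b concrete, here is a minimal, hedged GWT-RPC sketch (HomeworkService and its method names are hypothetical, not part of GWT itself): the only bytes that cross the wire for a homework submission are the small request and response payloads you define yourself.

    // Shared interface - GWT-RPC serialises only the arguments you declare here.
    import com.google.gwt.user.client.rpc.RemoteService;
    import com.google.gwt.user.client.rpc.RemoteServiceRelativePath;

    @RemoteServiceRelativePath("homework")
    public interface HomeworkService extends RemoteService {
        boolean submitAnswer(String questionId, String answer);
    }

    // Matching async interface used by the client.
    import com.google.gwt.user.client.rpc.AsyncCallback;

    public interface HomeworkServiceAsync {
        void submitAnswer(String questionId, String answer, AsyncCallback<Boolean> callback);
    }

    // Client-side code: a tiny request goes out, a tiny response comes back.
    import com.google.gwt.core.client.GWT;

    public class QuizForm {
        private final HomeworkServiceAsync service =
                (HomeworkServiceAsync) GWT.create(HomeworkService.class);

        void submit(String questionId, String answer) {
            service.submitAnswer(questionId, answer, new AsyncCallback<Boolean>() {
                public void onFailure(Throwable caught) { /* show a retry message */ }
                public void onSuccess(Boolean correct) { /* update the question widget */ }
            });
        }
    }

Each snippet would live in its own source file in a normal GWT project; the point is only that the payload per interaction is a couple of short strings, not a whole page.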

I personally suggest MySQL for a school project for its simplicity. PostgreSQL is powerful, but it is a bit more complicated to configure and its visual query-optimization tooling is not as good.
Without considering the bandwidth, I definitely suggest ZK since, again, it is much easier to learn, develop and maintain (and also more powerful). The bandwidth consumption and latency of GWT really depend on how much effort you want to invest and how familiar your people are with distributed computing, whereas with ZK the network traffic is basically UI state (not data), which is reasonably small. In short, you can get the best bandwidth and latency if you optimize as far as possible with GWT, while ZK leaves less to worry about; but if you want to improve further with ZK, you have to drop down to jQuery (i.e., JavaScript).

Thanks lechlukasz, I appreciate your comments and insight.
I will clarify my point about stand alone applications. We have a number of students, as high as 20%, who do not have access to broadband due to their geographic location. We are considering, as part of the design, how we may be able to distribute a stand alone version.
For instance, if we were to abstract all the database calls using a separate class in GWT, we could recompile a stand alone version that didn't make the database calls. The database would likely only be for tracking results and reporting.
In reality, we would likely implement the front end product first with references to empty methods for storing the results in a database and implement those methods at a later time.
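As a rough, hedged sketch of that abstraction (the names are hypothetical, not from our actual code): the front end only ever talks to a small interface, and the standalone build simply swaps in a do-nothing implementation.

    // The front end depends only on this interface.
    public interface ResultStore {
        void saveResult(String studentId, String questionId, boolean correct);
    }

    // Online build: forwards results to the server, which writes them to the database.
    public class RemoteResultStore implements ResultStore {
        public void saveResult(String studentId, String questionId, boolean correct) {
            // issue the RPC/HTTP call to the reporting back end here
        }
    }

    // Standalone (CD/USB) build: results are simply not recorded.
    public class NoOpResultStore implements ResultStore {
        public void saveResult(String studentId, String questionId, boolean correct) {
            // intentionally empty
        }
    }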
For the record, we have started to code up some test cases using GWT/SmartGWT and are pleased with the results so far. We cannot comment on the other technologies we considered, because we have not tried them to the same extent.
Cheers,
Ben

Related

What makes a web-based framework scalable?

Thank you very much in advance.
First of all, I conceive of scalability as the ability to design a system that does not have to change when the demand for its services, whatever they are, increases considerably. Do you need more hardware (vertically or horizontally)? Fine, add it at your leisure, because the system is prepared and has been designed to cope with it.
My question is simple to ask but presumably very complex to answer. I would like to know what I should look at in a framework to make sure it will scale accordingly, both in number of hits and in number of sessions running simultaneously.
This question is not about technology nor a particular framework at all, it is more a theoretical question.
I know this depends very much on having a good database design and proper hardware behind it, with replication, etc. Let's assume all of that exists; what criteria must my framework itself still meet?
Provide a memcache?
Ability to run across multiple machines (at the web server level) and use many replicated databases? But what is in the software that makes that possible?
etc...
Please, let's not relate the answers with any particular programming language or technology behind.
Thanks again,
D.
I think scalability depends most of all on the use case: do you expect huge amounts of data? Then you should focus on the database. Is it about traffic? Focus on the server. Is it about adding new features? Focus on your data model and the framework you are using.
Comparing a microposts-service like Twitter to a university website or a webservice like GoogleDocs you will find quite different requirements.
First of all, the common notion of scalability is the ability of software to improve in throughput or capacity as more hardware resources are added (CPUs, memory, bandwidth, etc.).
Software that does not improve when resources are increased is not scalable.
Definitions aside, I think your question is about how to evaluate whether a framework you are planning to introduce into your implementation may affect your software's ability to scale.
IMHO the most important factor to evaluate when introducing a framework is whether there is hidden serialization in it (serialization that, in effect, transfers to and affects your own software).
So if you introduce a framework that adds serialization to your application, that can limit your ability to scale.
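As an illustration of the idea (a hedged sketch with made-up class names, not taken from any particular framework): a component like the one below serialises every request behind a single lock, so adding CPUs or threads does not add throughput.

    import java.io.PrintWriter;
    import java.io.Writer;

    // Imagine a framework hiding something like this behind its API.
    public class FrameworkAuditLog {
        private final PrintWriter out;

        public FrameworkAuditLog(Writer sink) {
            this.out = new PrintWriter(sink);
        }

        // Hidden serialization point: only one request thread can log at a time,
        // so under load every request queues up here regardless of hardware.
        public synchronized void record(String requestId, String event) {
            out.println(requestId + " " + event);
            out.flush();
        }
    }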
How to evaluate?
Careful source code inspection (if open source)
Are there any performance guarantees offered by those who build the framework?
Do measurements yourself to see how introducing the framework affects your performance, and replace it if you are not satisfied.

Data persistence in Smalltalk / Seaside

I've been spending some time lately getting acquainted with Smalltalk and Seaside. I'm coming from the Java EE world and as you can imagine it's been challenging getting my mind around some of the Smalltalk concepts. :)
At the moment I'm trying to grasp how data persistence is most typically implemented in the Smalltalk world. My assumption as a Java programmer would be to use an RDBMS (e.g. MySQL) and an ORM (e.g. Hibernate). I understand that is not the case in Smalltalk (at least not with Hibernate). I'm not necessarily seeking the method that maps most closely to the way it is done in Java EE.
Is it most common to save data into the image, an object store or an RDBMS? Is it even typical for Smalltalk apps to use an RDBMS?
I understand there is no one-size-fits-all approach here and the right persistence strategy will depend on the needs of the application (how much data, concurrency, etc). What's a good approach that can start simple but also scale?
I've watched a video of Avi Bryant discussing the strategy he used for persistence and scaling DabbleDB. From what I understand, the customer's data was saved right into the image (one image per customer). That worked in his use case since customers didn't have to share data. Is this a common approach?
Hope I didn't make this TLDR. Many thanks to the insight you Smalltalk guys have provided in my previous questions. It's appreciated.
Justin,
don't worry, Smalltalk is not so different from other languages in this area; it just adds the image-based persistence option.
There are O/R mappers like Hibernate for Smalltalk; GLORP and its Pharo port DBXtalk are surely the most popular ones these days. These should feel very comfortable for you if you know Hibernate.
Then there are OODB solutions like GemStone or Magma DB or VOSS and many others that let you leave all the O/R-mapping problems behind. Most of these are pretty limited to storing Smalltalk objects, GemStone being an exception in providing bridges to Ruby and other languages.
There also are tools to store Smalltalk objects in modern NoSQL databases like CouchDB, Cassandra, GOODS or others. The trick here is just the conversion of Smalltalk object values to JSON streams and a little HTTP-requesting.
Finally, there is the option of saving your complete Smalltalk image. I'd say you can do that in a production environment, but it's not the standard or preferred way of doing it for many people. You do it a lot in development, because you can simply save an image and resume your work next time with all objects exactly as they were when you saved.
So the base line is: All the storage options you know are available in Smalltalk as well, plus one extra.
Joachim
I guess it basically depends on how big your DB is going to be and what kind of load it will be handling.
In my case, all the apps I have ever written use image persistence with disk serialization. Essentially, you just serialize your objects using Fuel on demand. In my case, I do so every time an important piece of data is dealt with, plus a regular process serializes them every 24 hours. The image is also automatically saved every 24 hours.
The biggest application I have written using this approach handles all the business processes of a small company of 10 workers plus around 50 freelancers who have been using it every day for a year and a half. The workload is pretty "big" taking into account that the application deals with big files all the time, but the app has stayed stable and fast. Switching to a new server and updating the Pharo image was as easy as getting the project back from Monticello and materializing the latest serialized "database".
In my opinion, ORM is an unnecessary pain, we're in the object world, and having to flatten our objects feels just wrong, especially when we have nice object-oriented solutions.
So, if your app handles fairly small amounts of data, I'd suggest either my simple approach or SandstoneDB. If your app deals with huge amounts of transactions and data, I'd go Gemstone.
Just my two cents.
Ramon Leon describes the situation, basic strategies, and their tradeoffs beautifully in his blog post.
I would start with his Simple Image Based Persistence framework, which I ported and use in Pharo 1.3. Mariano Martinez Peck recently adapted it to use Fuel (same link). It's very simple, does the job, and gives me much more confidence to play in my image, knowing that even if I permanently damage it, all my data is safe. I just copy the data folders to the new image folder, load my packages, and all my objects are alive in the new image.

SmartFox server

I'm currently working on an iPhone app project. The app is based on a simple chat function between 2 or more people who have registered with the app. I've outsourced the project. The developers working on it would like to use SmartFox servers for the client- and server-side communication. They say it's easier to manage and set up and is more efficient.
However, I'm not sure what the disadvantages of using the SmartFox framework are, or whether I should just ask them to develop/code the client and server communication themselves rather than using this framework.
Please let me have your suggestions on this issue.
Thank you
The usual response is: it depends on your budget, your time and needs.
If you just want to make a chat without advanced features, you may make it yourself. I say "may" because if ready-made solutions already exist, why reinvent the wheel (except for the price)?
However, if you envisage having a lot of users, some advanced features or the like, you should consider a third-party solution (like SmartFoxServer, ElectroServer, or others). They provide robust solutions with good documentation. Moreover, they offer tons of features, new ones appear regularly, they are updated, etc. Below is a small, non-exhaustive list of pros and cons of using SmartFox rather than a homemade solution, in my opinion (a rough sketch of what even a bare-bones homemade chat server involves follows after the list):
Advantages compared with a homemade solution:
Gain time
Robust solution
Performance
Multi-platform
Scalability (in time and concurrent users)
Deployment
Network engine fully functional (TCP/UDP, HTTP Tunneling, etc.)
Low learning curve
Low maintenance costs
Tons of features (in your case Buddy Lists, Moderation, Filters, etc.)
etc.
Disadvantages:
Price (for > 100 CCU) (it takes a long time to develop a homemade solution though + maintenance cost)
Many features that you will not use
I hope it'll help you in your reflection.
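As promised above, here is a hedged sketch of what even a bare-bones homemade chat server involves (plain Java sockets, arbitrary port, no SmartFox API involved): accept connections, remember the connected clients, and broadcast every line to all of them. Everything a product like SmartFox charges for - authentication, rooms, moderation, reconnection, scaling - would still have to be built on top of this.

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.io.PrintWriter;
    import java.net.ServerSocket;
    import java.net.Socket;
    import java.util.Set;
    import java.util.concurrent.CopyOnWriteArraySet;

    public class TinyChatServer {
        private static final Set<PrintWriter> clients = new CopyOnWriteArraySet<>();

        public static void main(String[] args) throws Exception {
            try (ServerSocket server = new ServerSocket(5050)) {   // port chosen arbitrarily
                while (true) {
                    Socket socket = server.accept();
                    new Thread(() -> handle(socket)).start();      // one thread per client
                }
            }
        }

        private static void handle(Socket socket) {
            PrintWriter out = null;
            try {
                BufferedReader in =
                        new BufferedReader(new InputStreamReader(socket.getInputStream()));
                out = new PrintWriter(socket.getOutputStream(), true);
                clients.add(out);
                String line;
                while ((line = in.readLine()) != null) {
                    for (PrintWriter client : clients) {
                        client.println(line);                      // broadcast to everyone
                    }
                }
            } catch (Exception e) {
                // client dropped; fall through to cleanup
            } finally {
                if (out != null) {
                    clients.remove(out);
                }
                try { socket.close(); } catch (Exception ignored) { }
            }
        }
    }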

Regarding NOSQL - Alternatives to RDBMS

I have been stumbling on RDBMS alternatives very often nowadays, and I am following some of the open-source implementations.
What I understand is that they are best suited to large-scale web apps (like Google and Amazon), mainly concentrating on very large distributed data stores.
How could this help small start-ups looking for an alternative to existing, costly data stores, and does it really yield both performance and maintenance gains for small applications?
I just started this discussion and believe somebody here has already faced the same frustration trying these new approaches and may have gained experience with them; that may help start-ups like us.
It all depends on your scaling requirements. RDBMSs require locks to work and so can only really be scaled "up". NoSQL-style DBs such as Google's Bigtable and CouchDB are massively scalable and very cheap, but they can get very complicated to write an app on top of, as developers have to deal with all kinds of data consistency and fault-tolerance issues in their application layer (a small sketch of that kind of application-level consistency handling follows below).
I would say for a small application you're probably better off with a SQL-based relational database. While in theory much more expensive, realistically, at a small scale that price is traded off against a much simpler system to work with.
If, however, your start-up is a multi-tenant solution which needs to deal with a lot of writes, I'd look carefully at the alternatives.
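To illustrate the kind of consistency work that moves into the application layer with an eventually consistent store (a hedged sketch with hypothetical types, not tied to any particular database): two replicas may hand back conflicting versions of the same record, and your code has to pick or merge a winner.

    import java.util.Comparator;
    import java.util.List;

    // Two replicas returned different versions of the same user profile (Java 16+ record).
    record ProfileVersion(String userId, String displayName, long lastModifiedMillis) {}

    class ConflictResolver {
        // Naive last-write-wins; a real application might merge fields instead,
        // or keep both versions and ask the user.
        ProfileVersion resolve(List<ProfileVersion> conflictingVersions) {
            return conflictingVersions.stream()
                    .max(Comparator.comparingLong(ProfileVersion::lastModifiedMillis))
                    .orElseThrow();
        }
    }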

How do I plan an enterprise level web application?

I'm at a point in my freelance career where I've developed several web applications for small to medium sized businesses that support things such as project management, booking/reservations, and email management.
I like the work but find that eventually my applications get to a point where the overhead for maintenance is very high. I look back at code I wrote 6 months ago and find I have to spend a while just relearning how I originally coded it before I can make a fix or add a feature. I do try to practice using frameworks (I've used Zend Framework before, and am considering Django for my next project).
What techniques or strategies do you use to plan out an application that is capable of handling a lot of users without breaking and still keeping the code clean enough to maintain easily?
If anyone has any books or articles they could recommend, that would be greatly appreciated as well.
Although there are certainly good articles on that topic, none of them is a substitute for real-world experience.
Maintainability is not something you can plan entirely up front, except on very small projects. It is something you need to take care of during the whole project. In fact, creating loads of classes and infrastructure code in advance can produce code which is even harder to understand than naive spaghetti code.
So my advice is to clean up your existing projects by continuously refactoring them. Look at the parts which were a pain to change, and strive for simpler solutions that are easier to understand and to adjust. If the code is too bad even for that, consider rewriting it from scratch.
Don't start new projects and expect them to succeed just because you read some more articles or used a new framework. Instead, identify the failures of your existing projects and fix their specific problems. Whenever you need to change your code, ask yourself how to restructure it to support similar changes in the future. This is what you need to do anyway, because there will be similar changes in the future.
By doing those refactorings you'll stumble across various specific questions you can ask and read articles about. That way you'll learn more than by just asking general questions and reading general articles about maintenance and frameworks.
Start cleaning up your code today. Don't defer it to your future projects.
(The same is true for documentation. Everyone's first docs were very bad. After several months they turn out to be too verbose and filled with unimportant stuff. So complement the documentation with solutions to the problems you really had, because chances are good that next year you'll be confronted with a similar problem. Those experiences will improve your writing style more than any "how to write good" style guide.)
I'd honestly recommend looking at Martin Fowler's Patterns of Enterprise Application Architecture. It discusses a lot of ways to make your application more organized and maintainable. In addition, I would recommend using unit testing to give you a better comprehension of your code. Kent Beck's book on Test-Driven Development is a great resource for learning how to address change in your code through unit tests.
To improve the maintainability you could:
If you are the sole developer then adopt a coding style and stick to it. That will give you confidence later when navigating through your own code about things you could have possibly done and the things that you absolutely wouldn't. Being confident where to look and what to look for and what not to look for will save you a lot of time.
Always take time to bring documentation up to date. Include the task in the development plan; include that time in the plan as part of any change or new feature.
Keep documentation balanced: some high-level diagrams, meaningful comments. The best comments tell what cannot be read from the code itself, like the business reasons or "whys" behind certain chunks of code.
Include in the plan the effort to keep code structure, folder names, namespaces, object, variable and routine names up to date and reflective of what they actually do. This will go a long way in improving maintainability. Always call a spade a spade. Avoid large chunks of code, structure it by the means available within your language of choice, and give chunks meaningful names.
Low coupling and high cohesion. Make sure you are up to date with techniques for achieving these: design by contract, dependency injection, aspects, design patterns, etc. (a small dependency-injection sketch follows after this list).
From task management point of view you should estimate more time and charge higher rate for non-continuous pieces of work. Do not hesitate to make customer aware that you need extra time to do small non-continuous changes spread over time as opposed to bigger continuous projects and ongoing maintenance since the administration and analysis overhead is greater (you need to manage and analyse each change including impact on the existing system separately). One benefit your customer is going to get is greater life expectancy of the system. The other is accurate documentation that will preserve their option to seek someone else's help should they decide to do so. Both protect customer investment and are strong selling points.
Use source control if you don't do that already
Keep a detailed log of everything done for the customer plus any important communication (a simple computer or paper based CMS). Refresh your memory before each assignment.
Keep a log of issues left open, ideas, suggestions per customer; again refresh your memory before beginning an assignment.
Plan ahead how the post-implementation support is going to be conducted, and discuss it with the customer. Make sure your systems are easy to maintain. Plan for parameterisation, monitoring tools and built-in sanity checks. Sell post-implementation support to the customer as part of the initial contract.
Expand by hiring, even if you need someone just to provide that post-implementation support or do the admin bits.
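As mentioned in the point on low coupling above, here is a small, hedged dependency-injection sketch (hypothetical names): the business logic depends on an abstraction rather than a concrete implementation, so either side can change, or be replaced in tests, independently.

    // The business logic depends only on this abstraction.
    interface Notifier {
        void notify(String recipient, String message);
    }

    // One concrete implementation; a test could inject a fake instead.
    class EmailNotifier implements Notifier {
        public void notify(String recipient, String message) {
            System.out.println("mail to " + recipient + ": " + message); // real mail-sending omitted
        }
    }

    class InvoiceService {
        private final Notifier notifier;

        // The dependency is injected from outside, keeping coupling low.
        InvoiceService(Notifier notifier) {
            this.notifier = notifier;
        }

        void sendReminder(String customerEmail) {
            notifier.notify(customerEmail, "Your invoice is overdue.");
        }
    }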
Recommended reading:
"Code Complete" by Steve Mcconnell
Anything on design patterns are included into the list of recommended reading.
The most important advice I can give, having helped grow an old web application into an extremely high-availability, high-demand web application, is to encapsulate everything - in particular:
Use good MVC principles and frameworks to separate your view layer from your business logic and data model.
Use a robust persistence layer so that your business logic is not coupled to your data model
Plan for statelessness and asynchronous behaviour.
Here is an excellent article on how eBay tackles these problems
http://www.infoq.com/articles/ebay-scalability-best-practices
Use a framework / MVC system. The more organised and centralized your code is the better.
Try using Memcache. PHP has a built-in extension for it; it takes about ten minutes to set up and another twenty to put into your application. You can cache whatever you want in it - I cache all my database records in it, for every application. It does wonders.
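The answer above refers to PHP's memcache extension; as a hedged, language-neutral sketch of the same cache-aside idea (here in Java, with an in-process map standing in for the memcached client, and hypothetical names): check the cache first, fall back to the database, and invalidate on writes.

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.function.Function;

    class RecordCache {
        private final Map<String, String> cache = new ConcurrentHashMap<>();

        // Return the cached record if present, otherwise load it from the
        // database and remember it for next time.
        String get(String key, Function<String, String> loadFromDatabase) {
            return cache.computeIfAbsent(key, loadFromDatabase);
        }

        // Drop the entry whenever the underlying row changes.
        void invalidate(String key) {
            cache.remove(key);
        }
    }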
I would recommend using a source control system such as Subversion if you aren't already.
You should consider maybe using SharePoint. It's an environment that is already designed to do all you have mentioned, and has many other features you maybe haven't thought about (but maybe you will need in the future :-) )
Here's some information from the official site.
There are 2 different SharePoint environments you can use: Windows SharePoint Services (WSS) and Microsoft Office SharePoint Server (MOSS). WSS is free and ships with Windows Server 2003, while MOSS isn't free but has many more features and covers almost all of your enterprise's needs.