What is the difference between these web servers in Seaside: Zinc, Kom, and Swazoo? - webserver

It's been a while since I've dabbled in Seaside, and, wanting to play around with it again for a small project, I downloaded the one-click image for Pharo and thought I'd look through the documentation to get my bearings. (There is a related question about performance differences between two of these, and one about which ones can serve static files, but neither explains the differences between all of them.)
The first image of A Walk on the Seaside shows two available servers in the "Seaside Control Panel": WASwazooAdaptor and WAComancheAdaptor. The download page for Seaside on Pharo says you can start Zinc, Kom, or Swazoo as your web server, and that each of them is available as an adaptor from the Seaside Control Panel. However, that panel in my newly downloaded image only has WATestServerAdaptor, ZnZincServerAdaptor, ZnZincStaticServerAdaptor, and ZnZincStreamingServerAdaptor. The second of these is the only one available by default.
I gather from all this conflicting information that Zinc is the latest one to use, at least on Pharo - is that correct? Are the other ones all outdated? Or do they each have their strengths and weaknesses, and need to be added to the image (e.g. via Monticello)? Are Kom and Swazoo only for Squeak? When would I use the three different Zinc servers on Pharo? I'm hoping someone can clear up my confusion.

Zinc has been the default, bundled HTTP framework (server and client) in Pharo since version 1.3. As far as I know, Zinc is only supported in Pharo.
Comanche (Kom) is the default web server of Squeak and is likewise only supported in Squeak (it "can" run in Pharo, but few people still use it there).
Swazoo was an attempt to have a common web server across Smalltalk dialects (it was conceived during a Camp Smalltalk event) and depended on a common set of "compatibility classes" called SPort (Smalltalk Portability). For a while it succeeded in being the baseline of several web-related solutions (I did two ports of Swazoo to Dolphin Smalltalk).
With Seaside 3, which was Swazoo's primary dependent, the Adapter pattern was chosen to provide a common API, so there was no longer any need for a common web server across all Smalltalk dialects, just one adapter per web-server implementation. For platform-specific features a new compatibility layer (Grease) was selected, dropping the dependency on SPort as well.
Swazoo is still used by the AIDA/web framework, mainly because its author is also one of the main coders of Swazoo itself.
Regarding the different subclasses of ZnServer: if you still don't know which one to use, you'll be fine with just ZnZincServerAdaptor startOn: 8080; you'll identify the specific uses of the other adaptors as you go.
Tip: ZnZincServerAdaptor default server debugMode: true.


What's the difference between a shim repository and a repository?

For instance, this one is a shim repository for highlightjs. I know a shim or a polyfill is usually used to support older, lower-level browsers. But I am focused on Chrome only, and when I change the shim highlightjs to the normal one, it results in a lot of errors.
So I wonder: what's the difference between a shim repository and a repository? Can anyone tell me?
The term "shim repository" has become somewhat popular for web programming components projects. Those repositories are "shims" in the sense that they created as a stand-in for those components and released in a standard format that meets the needs of 3rd-party projects and their package managers that incorporate these components.
Wikipedia defines a shim as follows:
In computer programming, a shim is a small library that transparently intercepts API calls and changes the arguments passed, handles the operation itself or redirects the operation elsewhere. Shims can be used to support an old API in a newer environment, or a new API in an older environment. Shims can also be used for running programs on different software platforms than they were developed for.
That's pretty much it.

MicroServices with Play and JSON Serialization

Let's assume that I have a couple of MicroServices, each exposing a set of REST endpoints. Assume that MicroService A is communicating with MicroService B and they exchange JSON data.
This JSON data needs to be serialized and de-serialized on both MicroService A and B. The serialization logic and the models are going to be the same in both code bases.
I can reduce this duplication by just moving the model classes into a small dependency and using it in both MicroServices. Not a problem! This might go against a goal of a MicroService architecture, which is "share nothing", but I feel an even bigger problem to address is code duplication. What do you guys think?
I do not see how the 'share nothing' principle applies in this scenario. As long as you publish your (de)serializer as an artifact in some repository (a Nexus, for example), you do not "share" anything; instead, you are using a (somewhat) external library.
If you use logging, for example, both of your projects will use, say, slf4s, but they do not share it, since each uses it separately.
There are a number of things to bear in mind when separating out a functionality into communicating micro-services:
Tying of scala versions between server and client
If your server requires specific versions of Scala (because, for example, you use a library that only exists for version 2.10), this should not impact your choice of Scala version in the client. This points towards keeping the classes that represent your communication path in a separate project which can be cross-compiled separately.
Tying of libraries between server and client
The less requirements your shared library places on your client code, the better. Even forcing a particular choice of Play server enforces a level of rigidity and coupling between client and server that is best avoided.
The best option is that this library causes a dependency on zero other libraries.
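As a purely illustrative sketch of such a zero-dependency shared library (in Python rather than Scala/Play, with hypothetical names), both services would depend on one small module like this and nothing else:

    # shared_protocol.py -- hypothetical shared module used by both services;
    # standard library only, so it imposes no extra libraries on either side.
    import json
    from dataclasses import dataclass, asdict

    @dataclass
    class OrderMessage:
        order_id: str
        amount_cents: int
        version: int = 1      # protocol version, to allow later evolution

    def to_json(msg: OrderMessage) -> str:
        return json.dumps(asdict(msg))

    def from_json(payload: str) -> OrderMessage:
        return OrderMessage(**json.loads(payload))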
Supporting protocol changes over time
One of the advantages of having separate services is that they can be upgraded and improved at separate points in time. You should always try to have the server support the previous version of your communication protocol whenever it changes. This allows you to roll back an update easily, and also to update the client at a different point in time.
Not allowing backwards compatibility means you need to update both services in lock-step. This not only reduces a lot of the advantages of using micro-services, it also makes it a huge pain to deal with rollbacks, if that becomes necessary.
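As a hedged sketch of that idea (hypothetical handler names, continuing the Python illustration above), the server can dispatch on a version field so that clients on the previous protocol keep working after an upgrade:

    # Hypothetical server-side dispatch that keeps the previous protocol alive.
    import json

    def handle_payload(payload: str) -> None:
        data = json.loads(payload)
        version = data.get("version", 1)
        if version == 1:
            handle_v1(data)   # old code path kept for at least one release
        else:
            handle_v2(data)

    def handle_v1(data: dict) -> None:
        ...

    def handle_v2(data: dict) -> None:
        ...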
The universal theme here is that the choices made on the server (Scala version, library versions, the point in time when protocol changes happen) should impose as little as possible on the client.
If you can follow this approach, I don't see a problem with using code to enhance the accessibility of talking to a service.

How to implement XEP-0289 FMUC plugin on a XMPP server?

I need to implement a distributed XMPP MUC application along the lines of XEP-0289, minus some of the features; in essence I want a bare-bones implementation of the plugin. My concern is to address fault tolerance, and as of now I do not want to worry about the performance considerations specified in XEP-0289.
I have looked into SleekXMPP as a tool to develop server-side plugins, but don't know how comfortable it would be to use it for such an implementation. Other options I have looked at are Openfire and Tigase. I am comfortable with Python/Java; other key factors to consider would be good documentation, ease of use, etc. Keeping that in mind, I would like to know what would be the preferred path to take for this development.
Any guidance will be appreciated.
You should be able to write a MUC component that includes FMUC (or similar). The general way to do this would be to use a library that supports XEP-0114 components (e.g. SleekXMPP (Python) or Swiften (C++)) and implement MUC+FMUC through that. You haven't said what your concerns with SleekXMPP are, but it's a fairly well-respected library in the XMPP community, so it seems a fair choice (I'd pick Swiften, but I'm biased as one of the authors).
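As a rough sketch of that first approach (hypothetical JID, secret and port; check the SleekXMPP documentation for the exact API), an external component connects to the server over XEP-0114 and the MUC/FMUC logic is then built on the stanzas it receives:

    # Minimal XEP-0114 component skeleton using SleekXMPP (illustrative only).
    from sleekxmpp.componentxmpp import ComponentXMPP

    class FmucComponent(ComponentXMPP):
        def __init__(self, jid, secret, server, port):
            super(FmucComponent, self).__init__(jid, secret, server, port)
            self.add_event_handler("message", self.on_message)

        def on_message(self, msg):
            # Here you would route the stanza to the right room and, for FMUC,
            # relay it to the federated remote room as XEP-0289 describes.
            self.send_message(mto=msg["from"], mbody="received", mfrom=msg["to"])

    if __name__ == "__main__":
        comp = FmucComponent("muc.example.org", "secret", "localhost", 5347)
        if comp.connect():
            comp.process(block=True)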
Your second option (patching the server directly) isn't generally the XMPPish way of adding customisations (as it's vendor-specific), but should also work if you can find someone sufficiently familiar with the server code, or if you're willing to become so.
To achieve fault tolerance (assuming you mean resilience to server failures) you'd need to run your XMPP server clustered, and also cluster your FMUC implementation. With that done, the usual XMPP fail-over using SRV records in DNS should ensure other servers retry connections to another host.
On a side note, the next version of FMUC (XEP-0289) will have some of the features of the current revision stripped out, and a number of improvements made based on deployment experience, so if your work is not time-critical, it might be of benefit to you to read that when it's released. I also note that there exists at least one implementation of FMUC already (Isode's M-Link, on which I work), and there is interest from other vendors, so using the standard protocol might benefit you in terms of not re-inventing the wheel.

Cafeteria Management System as a project. What should I use?

This is a part of my course project.
Basically, there are vendors which provide food and at peak hours the queue gets so large that people have to wait long for their order.
Our project is an online site which will enable users to order food. After ordering, the user will get information about where he is in the queue. This way students can order from their hostel rooms without actually going to the vendor and wasting time waiting in line. As soon as the user orders the food, the vendor gets notified of the order so that he can start preparing it.
I am completely new to web development so I am not sure what to use. This project will also work as an exercise to learn about web development.
I have heard about the Drupal and Joomla CMSes. There is also the Django framework, and I am confused as to what technology to use.
I am also confused about the difference between a framework and a CMS. How do they differ, and which one will suit me?
So, how do I go about developing the application?
A framework is a basic application without any concrete business logic. It contains basic structure and sometimes basic features (like database connectivity and other standard libraries). You have to write your code yourself.
A CMS is a content management system. It is essentially a complete website, but without the content; it provides tools to write the content (web pages). The most popular ones (like Joomla) also come with a bunch of templates that you can download to give your site any look you want.
A CMS probably doesn't have enough features to provide you with this logic. You will probably need to do some programming to get this done. It may still be useful to use a CMS, though. Lots of them support various plugins that allow you to add these kind of features and still allow you to easily edit regular pages.
Frameworks are libraries turned on their heads. You plug a library into your code; a framework turns this around by abstracting a particular problem in such a way that you plug your code into it to solve a problem. It's the Hollywood principle: "Don't call us; we'll call you."
People who write frameworks have deep knowledge of a particular problem domain. They usually represent the distillation of several attempts to solve a problem, with best practices, clear abstractions, and good plug-in points made clear from long experience.
Django is a Python framework for web applications that have a browser front end and relational databases for persistence.
That's one example of a framework.
A CMS (Content Management System) allows users to dynamically add and manage content in a web application. I think a CMS solves a slightly different problem from Django, because it is specialized to the problem of content management.
I'd recommend starting your queuing problem without a front end at all - just text. Concentrate on the subtleties of queuing. Get that right with your object model and then expose a user interface to display it to users.
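For example, a first pass at the queuing model could be plain objects with no front end at all (a minimal Python sketch with hypothetical names, just to make that advice concrete):

    # Text-only first pass at the ordering queue (hypothetical model names).
    from collections import deque

    class Order:
        def __init__(self, student, items):
            self.student = student
            self.items = items

    class VendorQueue:
        def __init__(self):
            self._orders = deque()

        def place(self, order):
            self._orders.append(order)
            return len(self._orders)      # the customer's position in the queue

        def next_order(self):
            return self._orders.popleft() if self._orders else None

    q = VendorQueue()
    print(q.place(Order("alice", ["sandwich"])))   # -> 1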
A CMS is a 'content management system'. It provides modules that you can plug in. The end effect is that it sets up a website for you, and you get admin pages where you can enter content. For special stuff, you use plugins; if you have to, you can write your own plugins.
A development framework is just a stack of technologies you can use to develop an application. For example, the Grails framework uses Hibernate (persistence) and Spring (dependency injection, among other things) under the covers: it is providing and using existing tools (which are themselves frameworks) which you will in turn use to build the application.
With a framework, you basically start with a bunch of tools in your toolbox, but few or no parts of a running web app out of the box; you have to develop the functionality with the tools. With a CMS, it's more like something has already been implemented for you, but it is really generic and you will have to tailor it to your needs.

How To Create a Flexible Plug-In Architecture?

A repeating theme in my development work has been the use of or creation of an in-house plug-in architecture. I've seen it approached many ways - configuration files (XML, .conf, and so on), inheritance frameworks, database information, libraries, and others. In my experience:
A database isn't a great place to store your configuration information, especially co-mingled with data
Attempting this with an inheritance hierarchy requires knowledge about the plug-ins to be coded in, meaning the plug-in architecture isn't all that dynamic
Configuration files work well for providing simple information, but can't handle more complex behaviors
Libraries seem to work well, but the one-way dependencies have to be carefully created.
As I seek to learn from the various architectures I've worked with, I'm also looking to the community for suggestions. How have you implemented a SOLID plug-in architecture? What was your worst failure (or the worst failure you've seen)? What would you do if you were going to implement a new plug-in architecture? What SDK or open source project that you've worked with has the best example of a good architecture?
A few examples I've been finding on my own:
Perl's Module::Pluggable and IOC for dependency injection in Perl
The various Spring frameworks (Java, .NET, Python) for dependency injection.
An SO question with a list for Java (including Service Provider Interfaces)
An SO question for C++ pointing to a Dr. Dobbs article
An SO question regarding a specific plugin idea for ASP.NET MVC
These examples seem to play to various language strengths. Is a good plugin architecture necessarily tied to the language? Is it best to use tools to create a plugin architecture, or to do it on one's own following models?
This is not an answer as much as a bunch of potentially useful remarks/examples.
One effective way to make your application extensible is to expose its internals as a scripting language and write all the top level stuff in that language. This makes it quite modifiable and practically future proof (if your primitives are well chosen and implemented). A success story of this kind of thing is Emacs. I prefer this to the eclipse style plugin system because if I want to extend functionality, I don't have to learn the API and write/compile a separate plugin. I can write a 3 line snippet in the current buffer itself, evaluate it and use it. Very smooth learning curve and very pleasing results.
One application which I've extended a little is Trac. It has a component architecture which in this situation means that tasks are delegated to modules that advertise extension points. You can then implement other components which would fit into these points and change the flow. It's a little like Kalkie's suggestion above.
Another one that's good is py.test. It follows the "best API is no API" philosophy and relies purely on hooks being called at every level. You can override these hooks in files/functions named according to a convention and alter the behaviour. You can see the list of plugins on the site to see how quickly/easily they can be implemented.
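A minimal sketch of that hook-based style (illustrative only, not py.test's actual internals): the application calls named hooks at fixed points, and a plugin simply provides functions that follow the naming convention:

    # Toy hook system in the spirit of convention-based hooks.
    import types

    class HookCaller:
        def __init__(self):
            self._plugins = []

        def register(self, plugin):
            self._plugins.append(plugin)

        def call(self, hook_name, **kwargs):
            results = []
            for plugin in self._plugins:
                fn = getattr(plugin, hook_name, None)
                if callable(fn):
                    results.append(fn(**kwargs))
            return results

    # A "plugin" is just an object (or module) defining hook functions by name.
    plugin = types.SimpleNamespace(
        on_item_collected=lambda item: print("collected:", item))

    hooks = HookCaller()
    hooks.register(plugin)
    hooks.call("on_item_collected", item="test_login")   # the application drives the hooks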
A few general points.
Try to keep your non-extensible/non-user-modifiable core as small as possible. Delegate everything you can to a higher layer so that extensibility increases. There is then less to correct in the core in case of bad choices.
Related to the above point is that you shouldn't make too many decisions about the direction of your project at the outset. Implement the smallest needed subset and then start writing plugins.
If you are embedding a scripting language, make sure it's a full one in which you can write general programs and not a toy language just for your application.
Reduce boilerplate as much as you can. Don't bother with subclassing, complex APIs, plugin registration and stuff like that. Try to keep it simple so that it's easy, not just possible, to extend. This will let your plugin API be used more and will encourage end users, not just plugin developers, to write plugins. py.test does this well; Eclipse, as far as I know, does not.
In my experience I've found there are really two types of plug-in Architectures.
One follows the Eclipse model which is meant to allow for freedom and is open-ended.
The other usually requires plugins to follow a narrow API because the plugin will fill a specific function.
To state this in a different way, one allows plugins to access your application while the other allows your application to access plugins.
The distinction is subtle, and sometimes there is no distinction... you want both for your application.
I do not have a ton of experience with Eclipse/Opening up your App to plugins model (the article in Kalkie's post is great). I've read a bit on the way eclipse does things, but nothing more than that.
Yegge's properties blog talks a bit about how the use of the properties pattern allows for plugins and extensibility.
Most of the work I've done has used a plugin architecture to allow my app to access plugins, things like time/display/map data, etc.
Years ago I would create factories, plugin managers and config files to manage all of it and let me determine which plugin to use at runtime.
Now I usually just have a DI framework do most of that work.
I still have to write adapters to use third party libraries, but they usually aren't that bad.
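A small sketch of that second style ("your application accesses plugins"), with hypothetical names: the application defines a narrow interface for one job, and whichever plugin (or adapter around a third-party library) fulfils it gets injected:

    # "Application accesses plugins" style: a narrow interface for one job.
    from abc import ABC, abstractmethod

    class MapDataSource(ABC):
        @abstractmethod
        def tiles_for(self, region: str) -> list:
            ...

    class OfflineMapSource(MapDataSource):        # one concrete plugin/adapter
        def tiles_for(self, region: str) -> list:
            return ["cache/" + region + "/0_0.png"]

    class Renderer:
        def __init__(self, source: MapDataSource):   # a DI framework could wire this
            self.source = source

        def draw(self, region: str) -> None:
            for tile in self.source.tiles_for(region):
                print("drawing", tile)

    Renderer(OfflineMapSource()).draw("sector-7")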
One of the best plug-in architectures that I have seen is implemented in Eclipse. Instead of having an application with a plug-in model, everything is a plug-in. The base application itself is the plug-in framework.
http://www.eclipse.org/articles/Article-Plug-in-architecture/plugin_architecture.html
I'll describe a fairly simple technique that I have used in the past. This approach uses C# reflection to help in the plugin loading process. The technique can be adapted to C++, but you lose the convenience of reflection.
An IPlugin interface is used to identify classes that implement plugins. Methods are added to the interface to allow the application to communicate with the plugin, for example an Init method that the application uses to instruct the plugin to initialize.
To find plugins, the application scans a plugin folder for .NET assemblies. Each assembly is loaded, and reflection is used to scan for classes that implement IPlugin. An instance of each plugin class is created.
(Alternatively, an XML file might list the assemblies and classes to load. This might help performance, but I never found performance to be an issue.)
The Init method is called on each plugin object. It is passed a reference to an object that implements the application interface: IApplication (or something else named specifically for your app, e.g. ITextEditorApplication).
IApplication contains methods that allow the plugin to communicate with the application. For instance, if you are writing a text editor, this interface might have an OpenDocuments property that allows plugins to enumerate the collection of currently open documents.
This plugin system can be extended to scripting languages, e.g. Lua, by creating a derived plugin class, e.g. LuaPlugin, that forwards IPlugin functions and the application interface to a Lua script.
This technique allows you to iteratively implement your IPlugin, IApplication and other application-specific interfaces during development. When the application is complete and nicely refactored you can document your exposed interfaces and you should have a nice system for which users can write their own plugins.
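A rough analogue of that discovery step (the answer describes C# and .NET reflection; this Python sketch with hypothetical names shows the same scan-folder / find-implementations / instantiate flow, where plugin modules are expected to import Plugin from the host):

    # Scan a plugins folder, import each module, and instantiate every class
    # implementing the Plugin interface (hypothetical names, Python analogue).
    import importlib.util
    import inspect
    import pathlib

    class Plugin:
        def init(self, app):            # equivalent of Init(IApplication)
            raise NotImplementedError

    def load_plugins(folder, app):
        plugins = []
        for path in pathlib.Path(folder).glob("*.py"):
            spec = importlib.util.spec_from_file_location(path.stem, path)
            module = importlib.util.module_from_spec(spec)
            spec.loader.exec_module(module)
            for _, cls in inspect.getmembers(module, inspect.isclass):
                if issubclass(cls, Plugin) and cls is not Plugin:
                    plugin = cls()
                    plugin.init(app)    # hand the plugin the application object
                    plugins.append(plugin)
        return plugins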
I once worked on a project that had to be so flexible in the way each customer could set up the system that the only good design we found was to ship the customer a C# compiler!
If the spec is filled with words like:
Flexible
Plug-In
Customisable
Ask lots of questions about how you will support the system (and how support will be charged for, as each customer will think their case is the normal one and should not need any plug-ins). In my experience, supporting customers (or front-line support people) who write plug-ins is a lot harder than building the architecture itself.
Usually I use MEF. The Managed Extensibility Framework (MEF for short) simplifies the creation of extensible applications. MEF offers discovery and composition capabilities that you can leverage to load application extensions.
If you are interested, read more...
In my experience, the two best ways to create a flexible plugin architecture are scripting languages and libraries. These two concepts are in my mind orthogonal; the two can be mixed in any proportion, rather like functional and object-oriented programming, but find their greatest strengths when balanced. A library is typically responsible for fulfilling a specific interface with dynamic functionality, whereas scripts tend to emphasise functionality with a dynamic interface.
I have found that an architecture based on scripts managing libraries seems to work the best. The scripting language allows high-level manipulation of lower-level libraries, and the libraries are thus freed from any specific interface, leaving all of the application-level interaction in the more flexible hands of the scripting system.
For this to work, the scripting system must have a fairly robust API, with hooks to the application data, logic, and GUI, as well as the base functionality of importing and executing code from libraries. Further, scripts are usually required to be safe in the sense that the application can gracefully recover from a poorly-written script. Using a scripting system as a layer of indirection means that the application can more easily detach itself in case of Something Bad™.
The means of packaging plugins depends largely on personal preference, but you can never go wrong with a compressed archive with a simple interface, say PluginName.ext in the root directory.
I think you need to first answer the question: "What components are expected to be plugins?"
You want to keep this number to an absolute minimum or the number of combinations which you must test explodes. Try to separate your core product (which should not have too much flexibility) from plugin functionality.
I've found that the IoC (Inversion of Control) principle (see the Spring framework) works well for providing a flexible base, to which you can add specialization to make plugin development simpler.
You can scan the container for implementations of an interface, using the interface itself as the plugin-type advertisement.
You can use the container to inject common dependencies which plugins may require (e.g. ResourceLoaderAware or MessageSourceAware).
The Plug-in Pattern is a software pattern for extending the behaviour of a class through a clean interface. Often the behaviour of classes is extended by class inheritance, where the derived class overrides some of the virtual methods of the base class. A problem with this solution is that it conflicts with implementation hiding; it also leads to situations where derived classes become gathering places for unrelated behaviour extensions. Scripting can also be used to implement this pattern, as mentioned above ("expose internals as a scripting language and write all the top-level stuff in that language; this makes it quite modifiable and practically future proof"), as can scripts managing libraries: the scripting language allows high-level manipulation of lower-level libraries (also as mentioned above).