I've been looking at these too; both seem to provide fully functional XMPP servers in Java. I know Tigase is designed in a very modular way, but I haven't looked at Openfire in as much detail yet.
My intended use would be to create a custom IM-based app, using XMPP for convenience rather than to open my server up to talk to other XMPP servers.
I'm trying to evaluate my needs based on the following, roughly in order of importance:
Documentation coverage & community
How easy to plug in own functionality
Licensing/cost - I don't plan to release my code
Maturity and stability
Do not use Openfire if you expect to scale beyond a couple of thousand concurrent connections.
Tigase is amazing at handling hundreds of thousands of concurrent connections and is wonderfully architected for large distributed platforms where XMPP is simply the external interface. That comes at the price of rather poor documentation; you often need to go and read the source code to understand what's going on.
Openfire is perfect for small setups and its API is simple and very well documented. Unfortunately, it's not architected to scale anywhere near what Tigase is capable of.
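To give a feel for how simple that API is, here is a minimal plugin skeleton, assuming the standard Openfire plugin interface; the package and class names are placeholders, and a real plugin also needs a plugin.xml descriptor in its jar, so treat this as a sketch rather than a complete example:

```java
package com.example.myimapp; // hypothetical package name

import java.io.File;

import org.jivesoftware.openfire.container.Plugin;
import org.jivesoftware.openfire.container.PluginManager;

/**
 * Minimal Openfire plugin skeleton. Openfire discovers the class through the
 * plugin.xml descriptor bundled in the plugin jar and calls these two hooks.
 */
public class MyImAppPlugin implements Plugin {

    @Override
    public void initializePlugin(PluginManager manager, File pluginDirectory) {
        // Called when the plugin is loaded; register packet interceptors,
        // IQ handlers, or custom components for your IM app here.
    }

    @Override
    public void destroyPlugin() {
        // Called when the plugin is unloaded; unregister anything added above.
    }
}
```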
Tigase is GPL-licensed (version 3, even), as opposed to Openfire, which is under the Apache license ... for a closed-source application, Openfire is the way to go.
It is embeddable and proven to be reliable with thousands of concurrent users. It even has gateways to communicate with legacy networks, like ICQ.
The only drawback I can see here is that it can handle only one domain per instance (port); however, from your description that should not be a problem.
I totally agree with @Yuriy that Tigase is great for high scalability, whereas Openfire is more suitable for a small shop with novice IT running chat for an SMB. I have gone into more detail on this in my blog post on Tigase vs Openfire.
And Openfire 3.7.0 beta has been out for a few days now.
Lots of bug fixes, and it now also supports Solaris as a host system.
Concerning Openfire ... it seems to be more or less abandoned, and certainly not for lack of bugs to fix ;)
I have been managing a small IRC server with 100-300 simultaneously connected users for 8 years now, running UnrealIRCd. I see many competitors replacing their UnrealIRCd with InspIRCd, and I would like to understand why they do that.
What are the benefits of InspIRCd?
There are many ways you can compare the two IRC servers; a good comparison can be found at Comparison of Internet Relay Chat daemons.
A few additions as well:
Both ircd projects are up to date.
Both have rich module libraries.
Both have a good recent commit history and an active issue tracker.
They have almost the same feature support, although InspIRCd comes out slightly ahead.
For me personally, I prefer InspIRCd; I feel it has the edge in accepting new ideas and implementing features.
In the end, based on those comparisons, it doesn't really matter; both of them do a great job all around and stand out from the rest.
I was thinking about developing an app that enables the user to remotely check the progress of a long-running task. The server application running the task is an existing commercial tool and comes with a proprietary client to connect to the server and manage it. However, the client is available only for Windows computers and not for mobile devices, hence my desire to fill the gap.
The communication between client and server is neither encrypted nor password protected in any way.
What would be the best way to analyze or reverse engineer such a proprietary protocol?
Are there any legal implications? (I know this is not the place to ask legal questions, but if you happen to know how to reverse engineer stuff, you may also know whether it is legal.)
I'm a fan of http://www.wireshark.org/ for protocol analysis. Free, powerful, extensible, cross-platform.
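Another trick that sometimes helps alongside Wireshark is to sit a small logging proxy between the client and the real server, so you can see exactly which bytes each side sends at each step of the conversation. Here is a rough sketch; the hostname and ports are placeholders, and this naive version makes no attempt at cleanup or concurrency limits:

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.ServerSocket;
import java.net.Socket;

/**
 * Naive logging TCP proxy: point the proprietary client at localhost:9000 and
 * every byte exchanged with the real server is hex-dumped to stdout.
 * "real-server.example.com" and the ports are placeholders.
 */
public class LoggingProxy {

    public static void main(String[] args) throws IOException {
        try (ServerSocket listener = new ServerSocket(9000)) {
            while (true) {
                Socket client = listener.accept();
                Socket server = new Socket("real-server.example.com", 12345);
                pump("client -> server", client.getInputStream(), server.getOutputStream());
                pump("server -> client", server.getInputStream(), client.getOutputStream());
            }
        }
    }

    /** Copies one direction of the conversation on its own thread, logging as it goes. */
    private static void pump(String label, InputStream in, OutputStream out) {
        new Thread(() -> {
            byte[] buffer = new byte[4096];
            try {
                int n;
                while ((n = in.read(buffer)) != -1) {
                    StringBuilder hex = new StringBuilder();
                    for (int i = 0; i < n; i++) {
                        hex.append(String.format("%02x ", buffer[i]));
                    }
                    System.out.println(label + ": " + hex);
                    out.write(buffer, 0, n);
                    out.flush();
                }
            } catch (IOException e) {
                // Connection closed; nothing more to log for this direction.
            }
        }).start();
    }
}
```

Since the traffic is unencrypted, the hex dumps plus Wireshark's dissectors are usually enough to start guessing at message framing.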
As regards the legal stuff: it depends on jurisdiction, and each country's courts seem to enjoy not coming up with consistent precedents. The general rule is that reverse engineering is okay for 'interoperability'. You'd really have to ask a lawyer for more info, though.
Personally if something is running on my machine and I want it to behave in a different way, I have no ethical issues forcing it to. That's just me, though.
I can fully imagine a virus writer ringing me up and making some kind of legal threats that I breached his EULA...
I've just been chatting with a Microsoft MVP, and he told me that MSMQ is obsolete. Is this true? What's the infrastructure for SOA, then?
Well, they released a new version (4.0) with Vista, and it's an explicitly available channel in WCF, out of the box, so I'd say no.
I've built new services based on it in the last six months, and there's no official MS documentation I'm aware of that says it's going away.
If you need reliable messaging, there aren't many good options. If you're already deep in database land, Service Broker may make sense, but otherwise... MSMQ
I don't think it's obsolete at all. Do a Google search for MSMQ and WCF - you'll get lots of results.
Here is a good article:
http://www.codeproject.com/KB/WCF/WCF_MSMQ_Integration.aspx
It seems that SaaS and Cloud computing are old concepts with new names, and I am curious if I am wrong.
For cloud computing you can look at: Difference between cloud computing and distributed computing?
Basically, it seems that what we have been doing with hosting all along is cloud computing; it is just that now some companies have put in much greater resources to ensure better uptime than my local ISP. But it seems that there is nothing really new here.
For REST, it seems that it is what we have been doing with CGIs for 15 years.
Here is a question on REST: What am I not understanding about REST?
It appears that REST is an old concept, and I am curious how it is different from what has been done since the early days of the web and, to a large extent, the early days of using telnet (which HTTP is on top of).
Am I mistaken in my simplification of these? I try to see how what is new is like what I know so I can see what more has to be learned in that topic, but for cloud computing and REST it seems that very little needs to be learned.
You are both right and wrong. You are right in the sense that new ideas are normally similar to old ideas, and indeed cloud computing is based significantly on distributed computing.
What is new in cloud computing is
virtualization
self-service
With virtualization, you can run multiple operating systems on a single piece of hardware. While that, in itself, isn't new either, it was never considered a relevant piece of the architecture in distributed systems. Virtualization enables self-service: users can create their own clusters of nodes without the administrator of the hardware taking any action. This allows a significant acceleration of deployment and a significant reduction in cost.
For REST, what you are missing is the client API. It is true that on the server side, a REST service can be implemented with CGI. What is new here is that it is not an end user who retrieves the URL, but a program.
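To make that concrete, here is roughly what "a program retrieving the URL" looks like with a plain HTTP client (Java 11+); the URL is made up, and any resource that returns JSON or XML would do:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

/**
 * A program, not a browser, acting as the REST client: it GETs a resource by
 * URL and works with the representation it gets back. The URL is a placeholder
 * for whatever resource your service exposes.
 */
public class RestClientExample {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(
                URI.create("https://api.example.com/orders/42"))
                .header("Accept", "application/json") // ask for a machine-readable representation
                .GET()
                .build();

        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());

        // The status code and body are interpreted by code, not read by a person.
        System.out.println(response.statusCode());
        System.out.println(response.body());
    }
}
```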
Saying that HTTP is on top of telnet ignores reality; this is like saying that we have made no progress since the introduction of copper wires for communication. Strictly speaking, HTTP is not on top of telnet, but on top of TCP (which telnet is also on top of, these days).
Considering Roy Fielding's dissertation coined the term REST back in 2000, you can definitely argue that there is nothing new about REST. Additionally, the REST architectural style was synthesized from successful existing practices, so REST implementations pre-date the definition. Having said that, there is nothing simple about designing REST interfaces. Ever since Netscape first abused cookies to allow servers to maintain session state, people have been swimming upstream against the web.
REST's recent resurrection has come mainly from people becoming disillusioned with SOAP-based web services. SOAP tried to hide HTTP instead of embracing it, and I think people are starting to realize how effective HTTP can be as a distributed application protocol that can do more than just deliver HTML to web browsers.
RESTful web applications don't use session state, so one could argue that by that virtue alone they are different from most web applications in existence at the moment.
As for Cloud Computing, I find myself agreeing with Larry Ellison for once in my life.
I'm in agreement with what you've posted. You might consider making this community wiki, since it's likely to garner many answers based on opinion. Cloud computing seems to have taken off as a buzzword, and this is largely due to a decrease in cost for mass quantities of hardware. And then there is REST, which is really just a formal name and definition for something that has been in place for a long time. Some people like to encapsulate ideas with buzzwords and acronyms. Sometimes it's useful to put a name to an idea, though.
Not only this, the concept of things being old concepts with new names is old. It's hard to be original these days :P
You are right about REST -- it's mostly old concepts with a lot of added pedantry and not much added substance.
Cloud computing has a small but fundamental difference from distributed computing. In distributed computing you had servers dedicated to particular functions, and usually some sort of directory service to locate the correct server. In cloud computing any server is capable of any task and usually the servers queue up for work which is distributed from a central point.
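As a toy illustration of that last point, here is a sketch of the "any server can take any task" model with a single shared queue. It is in-process only and the names are made up; a real cloud setup would use a distributed queue service rather than threads in one JVM:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

/**
 * Toy model of cloud-style work distribution: interchangeable workers pull
 * tasks from one central queue, so no worker is dedicated to a particular
 * function and no directory service is needed to find "the right" server.
 */
public class CentralQueueDemo {

    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<Runnable> workQueue = new LinkedBlockingQueue<>();

        // Start a few identical workers; any of them can handle any task.
        for (int i = 0; i < 3; i++) {
            int workerId = i;
            Thread worker = new Thread(() -> {
                try {
                    while (true) {
                        Runnable task = workQueue.take(); // blocks until work arrives
                        System.out.println("worker " + workerId + " picked up a task");
                        task.run();
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
            worker.setDaemon(true);
            worker.start();
        }

        // The "central point" just enqueues work without caring who runs it.
        for (int i = 0; i < 10; i++) {
            int taskId = i;
            workQueue.put(() -> System.out.println("task " + taskId + " done"));
        }
        Thread.sleep(1000); // give the daemon workers time to drain the queue
    }
}
```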
We need a good CMS that supports data clustering (managing and storing data on different servers). By "good", I mean: reliable, minimal bugs, the faster the better. (Oh, and it should make coffee :) )
If you want everything and the kitchen sink plus clustering/scaling support, I'd say Plone. Very big community, written in Python, and it uses the Zope stack, so it has a built-in application server. Etc., etc. I suggest taking a look at it.
Yes … kitchen sink + community + support: Plone. Development heading very much in the right direction.
Plone is in some ways a different creature from many other systems. Depending on the environment, ultra-high performance may require some attention, but the community has great expertise to guide whatever tuning is required.
http://plone.org/support | Chat Room is a great venue for diverse and honest advice on this subject. We regularly steer people away from Plone -- when some other system will better suit their needs.
I agree, and I think that you need to look for software that fits your needs. I have a few sites that only get minimal traffic and run on WordPress, but I also admin a site that runs Joomla and gets a steady amount of traffic.
Also, Joomla has a wonderfully customizable interface with extensions, plugins, themes, and a fairly easy-to-use administration tool.
I am not sure what "performance-oriented" means for you. There are Drupal and Joomla sites that receive millions of visits month after month and do not need special configurations like data clustering.
I think you must ask yourself whether you really need everything you listed.
For reliability and minimal bugs, I can vouch for Joomla.
I think performance is a function of the hardware.
When you get to data clustering levels, you're better off doing some real testing of CMS systems.
Most of the bigger names support a lot of things.
MS CMS Server, DotnetNuke
Anything used by really large shops should work.