It seems that SaaS and Cloud computing are old concepts with new names, and I am curious if I am wrong.
For cloud computing you can look at: Difference between cloud computing and distributed computing?
Basically, it seems that what we have been doing with hosting all along is cloud computing; it is just that some companies have now put in much greater resources to ensure better uptime than my local ISP. But it seems that there is nothing really new here.
For REST, it seems that it is what we have been doing with CGIs for 15 years.
Here is a question on REST: What am I not understanding about REST?
It appears that REST is an old concept, and I am curious how it is different from what has been done since the early days of the web, and, to a large extent, the early days of using telnet (which HTTP is on top of).
Am I mistaken in my simplification of these? I try to see how what is new is like what I know so I can see what more has to be learned in that topic, but for cloud computing and REST it seems that very little needs to be learned.
You are both right and wrong. You are right in the sense that new ideas are normally similar to old ideas, and indeed cloud computing is based significantly on distributed computing.
What is new in cloud computing is
virtualization
self-service
With virtualization, you can run multiple operating systems on a single piece of hardware. While that, in itself, isn't new either, it was never considered in distributed systems as a relevant piece of the architecture. Virtualization enables self-service: users can create their own clusters of nodes without the administrator of the hardware taking any action, as in the sketch below. This allows a significant acceleration of deployment and a significant reduction of cost.
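As a rough sketch of the self-service idea, provisioning becomes an authenticated API call rather than a request to an administrator. The endpoint, payload fields and token below are hypothetical, since each provider exposes its own API:

```python
import json
import urllib.request

# Hypothetical self-service provisioning call: the endpoint, token and
# payload fields are made up for illustration; real providers each have
# their own APIs and SDKs.
PROVISION_URL = "https://cloud.example.com/v1/clusters"

def provision_cluster(name, node_count, api_token):
    payload = json.dumps({"name": name, "nodes": node_count}).encode("utf-8")
    request = urllib.request.Request(
        PROVISION_URL,
        data=payload,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_token}",
        },
        method="POST",
    )
    with urllib.request.urlopen(request) as response:
        return json.load(response)  # e.g. {"id": "...", "status": "building"}

# The point of self-service: no ticket to a hardware administrator, just an
# authenticated API call that spins up virtualized nodes.
# cluster = provision_cluster("web-tier", node_count=5, api_token="...")
```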
For ReST, what you are missing is the client API. It is true that on the server side, a ReST service can be implemented with CGI. What is new here is that it is not an end user which retrieves the URL, but a program.
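For illustration, here is roughly what that looks like from the consuming program's side; the endpoint is hypothetical, but the point is that software, not a person, dereferences the URL and interprets the representation:

```python
import json
import urllib.request

# A program, not a person, retrieves the URL and acts on the machine-readable
# representation. The endpoint is hypothetical; any JSON-over-HTTP resource
# would do.
def get_order_status(order_id):
    url = f"https://api.example.com/orders/{order_id}"
    with urllib.request.urlopen(url) as response:
        order = json.load(response)   # machine-readable representation
    return order["status"]            # the client acts on the data

# A browser rendering HTML from a CGI script serves a human; here the same
# HTTP mechanics serve another piece of software.
```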
Saying that HTTP is on top of telnet ignores realities; this is like saying that we have made no progress since the introduction of copper wires for communication. Strictly speaking, HTTP is not on top of telnet, but on top of TCP (which telnet is also on top of, these days).
Considering that Roy's dissertation coined the term REST back in 2000, you can definitely argue that there is nothing new about REST. Additionally, the REST architectural style was synthesized from successful existing practices, so REST implementations pre-date the definition. Having said that, there is nothing simple about designing REST interfaces. Ever since Netscape first abused cookies to allow servers to maintain session state, people have been swimming upstream against the web.
REST's recent resurrection has come mainly from people becoming disillusioned with SOAP-based Web Services. SOAP tried to hide HTTP instead of embracing it, and I think people are starting to realize how effective HTTP can be as a distributed application protocol that can do more than just deliver HTML to web browsers.
RESTful web applications don't use session state, so one could argue that by that virtue alone they are different from most web applications in existence at the moment.
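Here is a minimal server-side sketch (Python's standard http.server, chosen only for brevity) of what "no session state" means in practice: the handler derives everything from the request itself, so any instance behind a load balancer could serve it. The token handling is deliberately simplified for illustration:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
import json

# Stateless handling: everything needed comes from the request (here the
# Authorization header), so there is no session table to share or replicate
# between server instances.
class StatelessHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        token = self.headers.get("Authorization", "")
        if not token.startswith("Bearer "):
            self.send_response(401)
            self.end_headers()
            return
        body = json.dumps({"path": self.path, "user": token[len("Bearer "):]})
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body.encode("utf-8"))

# if __name__ == "__main__":
#     HTTPServer(("localhost", 8000), StatelessHandler).serve_forever()
```

Contrast this with the cookie-based session pattern, where the server keeps per-client state and each request only makes sense against that stored state.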
As for Cloud Computing, I find myself agreeing with Larry Ellison for once in my life.
I'm in agreement with what you've posted. You might consider making this community wiki since it's likely to garner many answers based on opinion. Cloud computing seems to have taken off as a buzzword, and this is largely due to a decrease in cost for mass quantities of hardware. And then there is REST, which is really just a formal name and definition for something that has been in place for a long time. Some people like to encapsulate ideas with buzzwords and acronyms. Sometimes it's useful to put a name to an idea, though.
Not only this, the concept of things being old concepts with new names is old. It's hard to be original these days :P
You are right about REST -- it's mostly old concepts with a lot of added pedantry and not much added substance.
Cloud computing has a small but fundamental difference from distributed computing. In distributed computing you had servers dedicated to particular functions, and usually some sort of directory service to locate the correct server. In cloud computing any server is capable of any task and usually the servers queue up for work which is distributed from a central point.
Related
I know there's already a good question on this, but it doesn't really answer what I'm looking for.
From what I understand:
1. Both are used as a central focal point between applications
2. Both can use routing/mediation/transformation etc. between services/apps
But the only difference I can really see is that hub and spoke typically has many different formats entering the hub (SOAP/REST/XML/JSON...) while an ESB typically has a standard format (usually just SOAP).
Also, I keep reading that hub and spoke introduces a single point of failure compared to an ESB. So is the physical deployment the difference here? Where a hub has every possible endpoint, and an ESB has endpoints deployed across multiple hubs? So an ESB is just multiple hubs (for want of a better term)?
Can anyone help clear this up for me?
There is no exact answer here, since you can talk about ESB as a specific design pattern, or as the discourse about the evolution of software integration tools and SOA.
ESB as a design pattern means that you manage communication between different services using a bus where clients can easily plug in and out, as in the toy sketch below. This is usually done by forcing them to use standard data formats and protocols, whereas with hub and spoke you might use custom connectors and data transformations for each client. This limits the number of problems you may have with running multiple integrations, but you may still have a single point of failure in an ESB.
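To make that concrete, here is a toy, in-memory sketch of the bus idea; a real ESB adds transports, transformation, routing rules, persistence and so on, but the "standard envelope plus plug in/out" shape is the same:

```python
import json
from collections import defaultdict

# Toy in-memory sketch of the ESB idea: every client speaks one standard
# envelope format to the bus, and plugging a client in or out is just
# subscribing or unsubscribing.
class Bus:
    def __init__(self):
        self.subscribers = defaultdict(list)   # topic -> handlers

    def subscribe(self, topic, handler):
        self.subscribers[topic].append(handler)

    def publish(self, topic, payload):
        envelope = json.dumps({"topic": topic, "payload": payload})  # standard format
        for handler in self.subscribers[topic]:
            handler(envelope)

# Two "services" plug in without knowing about each other:
bus = Bus()
bus.subscribe("order.created", lambda msg: print("billing saw", msg))
bus.subscribe("order.created", lambda msg: print("shipping saw", msg))
bus.publish("order.created", {"id": 42, "total": 9.99})
```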
ESB as a discourse (or marketing term) is a more complex issue, where people argue over what is "True ESB". Some people say you need to have a modular architecture where you can select which components you deploy, or you need to be able to distribute the components across different machines to allow scaling and fault tolerance. In the extreme definition you would need to deploy even your data transformers as distributed services.
From Here
The ESB is the next generation of enterprise integration technology, taking over where EAI (hub-and-spoke) leaves off.
Smarter Endpoints: The ESB enables architectures in which more intelligence is placed at the point where the application interfaces with the outside world. The ESB allows each endpoint to present itself as a service using standards such as WSDL and obviates the need for a unique interface written for each application. Integration intelligence can be deployed natively on the endpoints (clients and servers) themselves. Canonical formats are bypassed in favor of directly formatting the payload to the targeted format. This approach effectively removes much of the complexity inherent in EAI products.
Distributed Architecture: Where EAI is a purely hub-and-spoke approach, ESB is a lightweight distributed architecture. A centralized hub made sense when each interaction among programs had to be converted to a canonical format. An ESB distributes much more of the processing logic to the endpoints.
No integration stacks: As customers used EAI products to solve more problems, each vendor added stacks of proprietary features wedded to the EAI product. Over time these integration stacks became monolithic and required deep expertise to use. ESBs, in contrast, are a relatively thin layer of software to which other processing layers can be applied using open standards. For example, if an ESB user wants to deploy a particular business process management tool, it can be easily integrated with the ESB using industry-standard interfaces such as BPEL for coordinating business processes.
The immediate short-term advantage of the ESB approach is that it achieves the same overall effect as the EAI (hub-and-spoke) approach, but at a much lower total cost of ownership. These savings are realized not only through reduced hardware and software expenses, but also via labor savings that come from using a framework that is distributed and flexible.
I don't know if this is what you mean when you ask whether the physical deployment is the difference, but the main difference between hubs and an ESB is that their communication sits at a different layer.
When we talk about an ESB we are referring to a software architecture model, whereas a hub refers to a strict hardware connection topology.
Of course, such a hardware topology (a collection of hubs) can implement an ESB, but there is a distinct line between the communication layers of the two.
Let us imagine we have an orders processing system for a pizza shop to design and build.
The requirements are:
R1. The system should be client- and use-case-agnostic, which means that the system can be accessed by a client which was not taken into account during the initial design. For example, if the pizza shop later decides that many of its customers use Samsung Bada smartphones, writing a client for Bada OS will not require rewriting the system's API or the system itself; or, for instance, if it turns out that using iPads instead of Android devices is somehow better for delivery drivers, then it would be easy to create an iPad client, and doing so would not affect the system's API in any way;
R2. Reusability, which means that the system can be easily reconfigured without rewriting much code if the business process changes. For example, if the pizza shop later starts accepting payments online alongside accepting cash on delivery (accepting a payment before taking an order vs. accepting a payment on delivery), then it would be easy to adapt the system to the new business process;
R3. High-availability and fault-tolerance, which means that the system should be online and should accept orders 24/7.
So, in order to meet R3 we could use Erlang/OTP and have the following architecture:
The problem here is that this kind of architecture has a lot of "hard-coded" functionality in it. If, for example, the pizza shop moves from accepting cash payments on delivery to accepting online payments before the order is placed, then it will take a lot of time and effort to rewrite the whole system and modify the system's API.
Moreover, if the pizza shop needs some enhancements to its CRM client, then again we would have to rewrite the API, the clients and the system itself.
So, the following architecture is aimed at solving those problems and thus helping to meet R1, R2 and R3:
Each 'service' in the system is a Webmachine webserver with a RESTful API. Such an approach has the following benefits:
all the goodness of Erlang/OTP, since each Webmachine is an Erlang application, which can be supervised and can be put into an Erlang release;
service oriented architecture with all the benefits of SOA;
easily adaptable to changes in the business process;
easy to add new clients and new functions to clients (e.g. to the CRM client), because a client can use RESTful APIs of all the services in the system instead of one 'central' API (Service composability in terms of SOA).
So, essentially, the system architecture proposed in the second picture is a Service Oriented Architecture where each service has a RESTful API instead of a WSDL contract and where each service is an Erlang/OTP application.
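To make the composability point concrete, here is a rough sketch (in Python purely for illustration, since any HTTP client would do) of a client composing two of the services' RESTful APIs. The endpoint URLs and payload fields are hypothetical stand-ins for the "Accept orders" and "Dispatch orders" services in Picture 2:

```python
import json
import urllib.request

# Hypothetical endpoints for two of the services in Picture 2.
ORDERS_API = "https://orders.pizza.example/api/orders"
DISPATCH_API = "https://dispatch.pizza.example/api/deliveries"

def _post_json(url, data):
    request = urllib.request.Request(
        url,
        data=json.dumps(data).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(request) as response:
        return json.load(response)

def place_and_dispatch(order):
    created = _post_json(ORDERS_API, order)                       # "Accept orders" service
    return _post_json(DISPATCH_API, {"order_id": created["id"]})  # "Dispatch orders" service

# Changing the business process (e.g. adding an online payment step) means
# composing one more service call here, not rewriting a central API.
```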
And here are my questions:
1. Am I trying to reinvent the wheel here (Picture 2)? Should I just stick with the pure Erlang/OTP architecture instead? ("Pure Erlang" means Erlang applications packed into a release, talking to each other via gen_server:call and gen_server:cast function calls);
2. Can you name any disadvantages of the suggested approach (Picture 2)?
3. Do you think it would be easier to maintain and grow (R1 and R2) a system like this (Picture 2) than a pure Erlang/OTP one?
4. The security of such a system (Picture 2) could be an issue, since there are many entry points open to the web (the RESTful APIs of all services) instead of just one entry point (Picture 1), couldn't it?
5. Is it OK to have several 'orchestrating modules' in such a system, or does some better practice exist? (the "Accept orders", "CRM" and "Dispatch orders" services in Picture 2);
6. Does pure Erlang/OTP (Picture 1) have any advantages over this approach (Picture 2) in terms of message passing and the limitations of the protocol? (partly discussed in my previous similar question, gen_server:call vs HTTP RESTful calls)
The thing to keep in mind regarding SOA is that the architecture is not about the technology (REST, WS-*). So you can get a good SOA in place with endpoints of several types if/when needed (what I call an Edge component - separating business logic from other concerns like communications and protocols).
Also it is important to note that service boundary is a trust boundary so when you cross it you may need to authenticate and authorize, cross network etc. Additionally, separation into layers (like data and logic) shouldn't drive the way you partition your services.
So from what I am reading in your questions, I'd probably partition the services into more coarse-grained services (see below). Communications within the boundary of a service can be whatever you like, whereas communications across services use a public API (REST or Erlang-native is up to you, but the point is that it is managed, versioned, secured etc.). Again, a service may have endpoints in multiple technologies to facilitate different users (sometimes you'd use an ESB to mediate between services and protocols, but the need for that depends on the size and complexity of your system).
Regarding your specific questions
1. As noted above, I think there's a place to expose more public APIs than just a single entry point; I am not sure that exposing each and every capability as a service with a public API is the right way to go.
2 & 3. The disadvantages of exposing every little thing are management overhead and decreased performance (e.g. you'd have to authenticate on these calls). You get nano-services, services whose overhead is more than their utility.
4. One thing to add about security is that the fact that a service has a REST API does not have to translate to having that API available to the general public. Deployment-wise you can keep it behind a firewall and restrict access to it to known addresses, etc.
5. It is OK to have several orchestrating modules, though if you get beyond a few you should probably consider some orchestration component (an ESB or an orchestration engine). Alternatively, you can use event-based integration and get choreography-based integration, which is more flexible (but somewhat less manageable).
6. The first option has the advantage of easy development and probably better performance (if that's an issue). The hard-coded integration layer can prove harder to maintain over time. The Erlang services, if you wrote them right, should be able to evolve independently if you keep API integration and message passing between them (luckily Erlang makes it relatively easy to get this right through its inherent features, e.g. immutability).
I'd introduce a third way that is rather more cost-effective and change-reactive. The architecture definitely should be service-oriented, because you have services explicitly. But there's no requirement to expose each service as a RESTful or WSDL-defined one. I'm not an Erlang developer, but I believe there's a way to invoke local and remote processes by messaging and thus avoid unnecessary serialisation/deserialisation for internal calls. But one day you will be faced with a new integration issue. For example, you will have to integrate an accounting or logistics system. Then, if you designed the architecture well with regard to SOA principles, most of the effort will go into exposing the existing service with a RESTful front-end wrapper, with no effort spent refactoring existing connections to other services. The important thing is to keep the domains of responsibility clean: each service should be responsible for the activity it was originally designed for.
The security issue you mentioned is a known one. You should have authentication/authorization in all exposed services, using tokens for example.
Do RESTful web services have any service registries like UDDI? Or can UDDI hold RESTful web services as well?
UDDI can be used for REST services. WSDLs can be used to describe HTTP web services, but frankly I feel that WSDL is not a real match for a REST resource architecture.
At the most basic level, UDDI is simply a mapping of attributes to service endpoints. So, if you're simply looking for a system that can do that, then UDDI will fit the bill.
UDDI is not popular in the wild, wide open internet, but it is used "behind the scenes" as an orchestration component.
As Darrel mentioned, DNS is another valid discovery mechanism.
My personal complaint with DNS is simply that even though DNS has all of the advantages mentioned in the article he cites, the downside is that DNS is such a critical part of the network fabric that it tends not to be available to developers. Typically, the network operations folks (who tend to be more notorious than even DBAs) hold infrastructure like DNS quite close. Finally, while DNS is quite capable of these tasks, in many cases the standard default configuration and deployment of DNS may need to be changed. For example, we've started serving certificates from DNS, and we had to enable TCP for DNS. Again, this meant more involvement from network ops.
On top of that, while there is a lot of expertise and knowledge of DNS out in the world, knowledge and expertise of HTTP and "doing stuff" on a web server is far greater. The consequence of that is simply that when developers think about and look for some kind of solution to this problem, the first place they're going to look is likely an HTTP-based solution.
So, in that sense UDDI is possibly a better solution, just in terms of being able to get it rolled out quickly with little hassle.
Of course, UDDI is a SOAP based service. That's not that big a deal, really. Not a great fit for a RESTful system, but it's not awful. Functional, if a little "impure".
As for a standard HTTP-based service registry, there's nothing that I know of. It's reasonably simple to ad hoc one with HTML, for example. The fact that UDDI hasn't taken off in the world at large isn't so much a limitation of or slight against UDDI. Rather, it's simply that the vision of discovering arbitrary services hasn't really come to fruition; the need simply isn't quite there. There's a lot more involved out of band with service discovery beyond location and semantics, like business relationships and such.
Internally, within the enterprise, those logistics are solved, so service discovery has value. Out in the wild, not so much.
It's not dead ;)
signed a jUDDI developer
juddi.apache.org
Edit: There's also WS-Discovery which is supported by both CXF and WCF. Worth checking out.
FWIW, UDDIv3 does specify a REST interface, however I don't think anyone other than jUDDI has implemented it. It will be included with v3.2 and up using CXF, Jettison and WADL. Source: http://svn.apache.org/repos/asf/juddi/trunk/juddi-rest-cxf/src/main/java/org/apache/juddi/api/impl/rest/UDDIInquiryJAXRS.java
UDDI was designed for SOAP services; however, it is not even used for those any more. UDDI was pretty much dead as of 2006.
This article shows how to use DNS to do discovery.
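For a flavour of what DNS-based discovery looks like in code, here is a sketch using SRV records; it assumes the third-party dnspython package, and the service name is made up for illustration:

```python
# Sketch of DNS-based discovery via SRV records. Requires the third-party
# dnspython package (pip install dnspython); the service name below is
# hypothetical.
import dns.resolver

def discover(service="_myapi._tcp.example.com"):
    answers = dns.resolver.resolve(service, "SRV")
    # Each SRV record names a host and port (plus priority and weight),
    # which a client can use to pick an endpoint.
    return [(str(record.target).rstrip("."), record.port) for record in answers]

# endpoints = discover()   # e.g. [("api1.example.com", 443), ("api2.example.com", 443)]
```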
I am about to use Apache Hadoop; the headlines read:
The Apache Hadoop project develops open-source software for reliable, scalable, distributed computing.
I can relate "scalability" to programming, but I just don't know how this "distributing" can help me in my development. According to wikipedia:
A distributed system consists of multiple autonomous computers that communicate through a computer network. The computers interact with each other in order to achieve a common goal.
So does this mean I can deploy my web apps across multiple computers and do some sort of "intense computing"? The terms that come into my mind are Content Delivery Networks and Cloud Computing.
Web development has always been about distributed computing, since clients have been on different machines to the servers they talk to, web pages can pull in resources from many servers to build a page's content, and servers may talk to other machines to achieve their goals. CDNs make this more obvious than before, but really they're just an evolution, an introduction of a virtualization/indirection layer between what you ask for and the hardware used to provide it.
Clouds are about taking the concepts of virtualization and applying them to remote hosting, both of low-level OSes and higher-level software platforms. The really interesting thing about them is that this enables different business models on the part of customers (and with different risks too, but that's mostly not related to the fact that it's distributed computing but rather that it is not wholly under your control in your own jurisdiction).
I've found that the most effective use of distributed computing is when you think in terms of connecting together distinct services, each of which with different capabilities (which might be for technical reasons, or might not; sometimes, it's for business or legal reasons that things have to be divided up) and where each of those services may be provided by many components in multiple locations. There are, and continue to remain, issues with balancing the need for performance (which is a force that brings components together) and the need for robustness (which tends to lead to distribution and replication) within the overall context of the general capabilities map.
My goodness! That paragraph sounds like terrible piffle! What I'm trying to say is that it's all trade-offs, and you should be prepared for not getting it right first time.
(Hadoop is a mechanism for doing a distributed file store, and for efficiently applying certain classes of operation – those that fit well with MapReduce or other similar scatter-gather algorithms – across that whole dataset. If that shoe fits, use it. But it doesn't solve all problems, and thank goodness for that! Things that can do everything tend to look very much like things that can't actually do anything at all, and usefulness and comprehensibility come in the restrictions.)
Hadoop is typically used to process massive data sets by distributing the processing of that data set across multiple machines.
What this means is you probably don't want to use it to "deploy an application". You might use it to process stats on your application, however. For instance, you might have very large logs of user data. This would happen if your user data grows to become too large to fit on a single hard drive, and/or would take too long for one machine to process stats on (using standard methods like an SQL query).
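To see what "distributing the processing" buys you, here is the map/shuffle/reduce model Hadoop applies across many machines, run locally on a toy log. This is the model only, not Hadoop's actual API:

```python
from collections import defaultdict

# Not Hadoop's API, just the map -> shuffle -> reduce model it applies across
# many machines, run here on a single small list of log lines.
def map_phase(line):
    # e.g. count hits per URL from a web log line like "GET /pizza 200"
    parts = line.split()
    yield parts[1], 1

def reduce_phase(url, counts):
    return url, sum(counts)

def mapreduce(lines):
    shuffled = defaultdict(list)
    for line in lines:                    # Hadoop splits this work across nodes
        for key, value in map_phase(line):
            shuffled[key].append(value)   # the "shuffle" groups values by key
    return dict(reduce_phase(k, v) for k, v in shuffled.items())

logs = ["GET /pizza 200", "GET /menu 200", "GET /pizza 404"]
print(mapreduce(logs))    # {'/pizza': 2, '/menu': 1}
```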
Ygam, the traditional roles of "clients" and "servers" were pretty stable from about 1960 until about 2005.
I believe with every fiber of my being that distributed computing today means that we all carry processors around in our pockets.
Phones do computing work. Phones do NOT need centralized servers, but they DO benefit from them.
Phones, smartphones and tablets are an example of where distributed computation is going.
You can make a wifi base station out of an Android device now. So a phone becomes a server of sorts, just for that instant in the coffee shop when you turn it on for the cute person next to you without internet... and now I digress.
Related question: What is the most efficient way to break up a centralised database?
I'm going to try and make this question fairly general so it will benefit others.
About 3 years ago, I implemented an integrated CRM and website. Because I wanted to impress the customer, I implemented the cheapest architecture I could think of, which was to host the central database and website on the web server. I created a desktop application which communicates with the web server via a web service (this application runs from their main office).
In hindsight this was rather foolish, as now that the company has grown, their internet connection becomes slower and slower each month. Now, because of the speed issues, the desktop software times out on a regular basis, and the customer is left with 3 options:
Purchase a faster internet connection.
Move the database (and website) to an in-house server.
Re-design the architecture so that the CRM and web databases are separate.
The first option is the "easiest", but certainly not the cheapest long term. For the second option, if we move the website to in-house hosting, the client has to combat issues like an overloaded/poor/offline internet connection, loss of power, etc. And for the final option, the client is loath to pay a whole whack of cash for me to re-design and re-code the architecture, and I can't afford to do this for free (I need to eat).
Is there any way to recover when you've screwed up the design of a distributed system so badly that none of the options work? Or is it a case of cutting your losses and just learning from the mistake? I feel terrible that there's no quick fix for this problem.
You didn't screw up. The customer wanted the cheapest option, you gave it to them, this is the cost that they put off. I hope you haven't assumed blame with your customer. If they're blaming you, it's a classic case of them paying for a Chevy while wanting a Mercedes.
Pursuant to that:
Your customer needs to make a business decision about what to do. Your job is to explain to them the consequences of each of the choices in as honest and professional a way as possible and leave the choice up to them.
Just remember, you didn't screw up! You provided for them a solution that served their needs for years, and they were happy with it until they exceeded the system's design basis. If they don't want to have to maintain the system's scalability again three years from now, they're going to have to be willing to pay for it now. Software isn't magic.
I wouldn't call it a screw up unless:
It was known how much traffic or performance requirements would grow. And
You deliberately designed the system to under-perform. And
You deliberately designed the system to be rigid and non-adaptable to change.
A screw up would have been to over-engineer a highly complex system costing more than what the scale at the time demanded.
In fact it is good practice to only invest as much as can currently be leveraged by the business, using growth to fund further investment in scalability, should it be required. It is simple risk management.
Surely as the business has grown over time, presumably with the help of your software, they have also set aside something for the next level up. They should be thanking you for helping grow their business beyond expectations, and throwing money at you so you can help them carry through to the next level of growth.
All three of those options could be good. Which one is best depends on cost-benefit analysis, ROI, etc. It is partially a technical decision but mostly a business one.
Congratulations on helping build a growing business up until now, and on into the future.
Are you sure that the cause of the timeouts is the internet connection, and not some performance issues in the web service / CRM system? By timeout I'm going to assume you mean something like ~30 seconds, in which case:
Either the internet connection is to blame, in which case you would see these sorts of timeouts to other websites (e.g. Google) as well, which is clearly unacceptable, and so sorting out the connection is your only real option.
Or the timeout is caused by the desktop application, the web service, or excessively large amounts of information being passed backwards and forwards, in which case you should either address the performance issue as you would any other bug, or look into ways of optimising the desktop application so that less information is passed backwards and forwards.
In short: the architecture that you currently have seems (fundamentally) fine to me, on the basis that (performance problems aside) access for the company to the CRM system should be comparable to access for the public to the system. As long as your customers have reasonable response times, so should the company.
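One quick way to tell those two cases apart before committing to any of the three options is to time the web service against a known-fast site from the same office connection. A rough sketch (the URLs are placeholders):

```python
import time
import urllib.request

# Quick-and-dirty check of where the time goes: compare average response
# times for the CRM web service against a known-fast external site, measured
# from the office connection.
def time_request(url, attempts=5):
    samples = []
    for _ in range(attempts):
        start = time.perf_counter()
        with urllib.request.urlopen(url, timeout=30) as response:
            response.read()
        samples.append(time.perf_counter() - start)
    return sum(samples) / len(samples)

# print("web service:", time_request("https://crm.example.com/service/ping"))
# print("baseline:   ", time_request("https://www.google.com/"))
# If only the web service is slow, suspect the application or database
# rather than the office internet connection.
```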
Install a copy of the database on the local network. Then let the client software communicate with the local copy and let the database software do the synchronization between the local database server and the database on the webserver. It depends on which database you use, but some of them have tools to make that work. In MSSQL it is called replication.
First things first: how much of the code do you really have to throw away? What language did you use for the desktop client? If it is something .NET, you may be able to salvage a good chunk of the logic of the system and only need to redo the UI and some of the connections.
My thoughts are that 1 and 2 are out of the question; while 1 might be a good idea, it doesn't solve the real problem, and we as engineers should try to build solutions that don't depend on the client whenever possible. And 2 gets them into something they aren't experts at; it is better to keep the hosting elsewhere.
Also, since you mention a web service, is all you are really losing the UI? You can always reuse the web services for the web server interface.
Lastly, you could look at using a framework to provide a simple web-based CRUD interface to start with and then expand from there.
Are you sure the connection is saturated? You could be hitting all sorts of network, I/O and database problems... Unless you've already done so, use Wireshark to analyze the traffic; measure the throughput and share the results with us.