So, I've created a MySQL database on AWS RDS and written a .NET client desktop application that uses Sockets to authenticate with, connect to, and manipulate the database, using the endpoint ("xxxxxx.rds.amazon.com") and a username/password. This works great.
I was trying to see if I could accomplish something similar in client-side JavaScript. It seems like the analogous API it offers is WebSockets, which I am familiarizing myself with. However, it seems to me (mostly from the absence of how-tos on the web) that the endpoint ("xxxxxx.rds.amazon.com") is accessible via Sockets, but not via WebSockets, and that there is no alternate route to my MySQL database for WebSockets. Is this correct?
It makes sense that these are two different types of servers, but is it generally true that internet resources are served out to Sockets but not to WebSockets? If that is true, they are not as analogous as I originally thought; are WebSockets mostly good for communicating between WebSocket clients and servers that I create myself? Or can they be used to access good stuff already existing on the internet, as Sockets can?
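(To illustrate the distinction I'm asking about, here is a rough C# sketch, since that's what my desktop client uses; the hostname is the placeholder from above and 3306 is MySQL's default port. A plain TCP socket connects because MySQL speaks its own binary protocol, whereas a WebSocket client first needs the server to complete an HTTP Upgrade handshake, which MySQL doesn't speak.)

```csharp
using System;
using System.Net.Sockets;
using System.Net.WebSockets;
using System.Threading;
using System.Threading.Tasks;

class SocketVsWebSocketDemo
{
    const string Host = "xxxxxx.rds.amazon.com"; // placeholder RDS endpoint

    static async Task Main()
    {
        // Raw TCP socket: MySQL answers with its own binary handshake packet.
        using (var tcp = new TcpClient())
        {
            await tcp.ConnectAsync(Host, 3306);
            Console.WriteLine("TCP connect succeeded.");
        }

        // WebSocket: ConnectAsync sends an HTTP Upgrade request; since MySQL
        // does not speak HTTP, the handshake never completes and this throws.
        using (var ws = new ClientWebSocket())
        {
            try
            {
                await ws.ConnectAsync(new Uri($"ws://{Host}:3306/"), CancellationToken.None);
            }
            catch (WebSocketException ex)
            {
                Console.WriteLine($"WebSocket connect failed: {ex.Message}");
            }
        }
    }
}
```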
(Note: this isn't asking for opinions on the best way to do this, I'm just confirming my impression of the specific way these technologies are used.)
Thank you.
There are 2 applications that need to communicate with each other. They are both running on the same PC.
Main Application (C#)
Helper Application (C#) -> launched from Main Application
Helper Application will modify some data used/contained by the Main Application. Can the helper application be a microservice? (Not familiar with microservices, but I've seen the term while checking on the net.)
I found a helpful tutorial and was able to create a WCF Duplex Binding.
Now the Main Application and Helper Application can communicate.
I'm just wondering if this is a good solution (or would a microservice be better?)
Can the helper application be a microservice? (not familiar with microservices...
Sure. "Microservices" is just the latest term that describes distributed component-based network computing. It goes back a long way to the days of (and possibly further) distributed COM (DCOM) and Corba; COM+ and finally service-orientated architecture (SOA). WCF used SOA as a best practice. In practice the only real difference between SOA and microservices is that the latter tend to adopt HTTP-REST-JSON as the transport/API/payload whereas the SOA generation is transport/payload neutral but generally using SOAP.
I found a helpful tutorial and was able to create a WCF Duplex Binding. Now the Main Application and Helper Application can communicate. I'm just wondering if this is a good solution (or a microservice is better??)
Well, technically you are already using microservices/SOA.
I'm just wondering if this is a good solution
No. The problem with SOA/microservices on the same machine is that they are very chatty, have a high overhead, and their message payloads are quite verbose. Both SOAP and REST utilise text messages by default (XML and JSON respectively), which are large compared to binary.
If both client and server are on the same machine you are best to just use straight-up named pipes and avoid WCF/REST. Communication over named pipes is binary and so very compact; named pipes run in kernel mode, meaning they are very fast, and as an added bonus, local communication bypasses the network layer (as opposed to, say, TCP, which goes through it even for LOCALHOST).
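As a rough sketch of that in C# (the pipe name and messages are made up for illustration; the Main Application would host the server end and the Helper Application the client end):

```csharp
using System;
using System.IO;
using System.IO.Pipes;
using System.Threading.Tasks;

class NamedPipeDemo
{
    const string PipeName = "MainToHelperPipe"; // hypothetical shared pipe name

    // Launch with "client" in the Helper Application; the Main Application runs the server.
    static Task Main(string[] args) =>
        args.Length > 0 && args[0] == "client" ? RunClientAsync() : RunServerAsync();

    // Main Application side: wait for the helper to connect, then answer one request.
    static async Task RunServerAsync()
    {
        using (var server = new NamedPipeServerStream(PipeName, PipeDirection.InOut))
        {
            await server.WaitForConnectionAsync();
            using (var reader = new StreamReader(server))
            using (var writer = new StreamWriter(server) { AutoFlush = true })
            {
                string request = await reader.ReadLineAsync();
                await writer.WriteLineAsync($"ACK: {request}");
            }
        }
    }

    // Helper Application side: connect, send a request, print the reply.
    static async Task RunClientAsync()
    {
        using (var client = new NamedPipeClientStream(".", PipeName, PipeDirection.InOut))
        {
            await client.ConnectAsync();
            using (var writer = new StreamWriter(client) { AutoFlush = true })
            using (var reader = new StreamReader(client))
            {
                await writer.WriteLineAsync("update-data");
                Console.WriteLine(await reader.ReadLineAsync());
            }
        }
    }
}
```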
In my iPhone application I am calling (via a SOAP POST method) a web service which is written in .NET and hosted on a server, and it's all working fine. But my question is: can we write a web service in Objective-C and host it on a server, so that we can access it from any platform, like .NET, PHP, and Objective-C?
I read a fantastic tutorial regarding this question some time ago here.
To be honest, it can be quite difficult to really use this in a production environment. If you want all the features and tools Apple gives you (which seems to be the intention of your question), you'll have to use a Mac in order to run your service afterwards.
In my opinion, using PHP for example (backed by MySQL if you also need a db) is much easier. Almost all hosting providers support it, and you won't have to worry about setting up a bunch of Macs and connecting them to the internet via solid and stable links (and, with that, guaranteeing availability).
Yes. A web service is just some application that can provide a service over the web. As you can create an application in Objective-C, it can be a web service the same as one made in any other language.
You can make it run on any server where you have an Objective-C compiler; however, the framework you use may restrict your choice of server (i.e., you can write Objective-C on Windows, but you wouldn't be able to use the NS framework).
Web services are not limited to a programming language; however, you do need to find out whether there is a framework using Objective-C that can run on a specific server. For example, IIS allows you to use ASP.NET, which could be implemented using C# or VB.NET.
As for the clients that will consume the web services, they don't have to be a specific type of device. I think that's the point of web services. The messages that travel in between are in a standard format. For example, a SOAP message uses XML and travels over HTTP. Therefore, whether you use iPhone or Android or BlackBerry, you should have no problem making web service calls.
So in general, I think you have to see what kind of web services you want to create, and then see if Apple (I assume) can provide you with a good framework to do it. In terms of the client side, as long as your web services are using XML or JSON, it should be well supported.
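For example, a minimal C# client for some hypothetical JSON service (the URL is made up) looks the same regardless of what language the service itself is written in:

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;

class WebServiceClientDemo
{
    static async Task Main()
    {
        // Hypothetical endpoint; an iPhone, Android, or PHP client would make
        // the same HTTP call, because only the wire format (JSON/XML over HTTP)
        // matters, not the language the service was implemented in.
        using (var http = new HttpClient())
        {
            string json = await http.GetStringAsync("https://example.com/api/products/1");
            Console.WriteLine(json);
        }
    }
}
```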
Hope it helps.
I'm looking for architectural patterns of server-side software, particularly web apps, that have been used for good reason in the real world. Here are some I can think of:
single-server: all parts of the app run on the same server (database, app, web server listening to port 80 etc.)
simple 2-tier: database runs on single server "DB", all other parts in an "appserver" tier, which may contain any number of servers. The tiers communicate via ODBC or such.
variations of this (how many? can we enumerate them?) include single-master/multiple-slave DB servers and multi-master DB servers
3-tier: database runs on one tier, business objects and logic run on a second tier, presentation on a third tier, where 1 and 2 communicate via ODBC and 2 and 3 via some form of remote calls (e.g. RMI)
I seem to recall from some presentation that at one point, eBay had an architecture that had an app tier generate XML, which then was transformed into HTML in a separate tier. Is that common or an oddity?
a bunch of web apps use memcachedb or such to speed things up. A set of caching servers is arguably another tier, perhaps?
Could you help me enumerate some of these patterns, or point to places where some have been described?
You might enjoy the decade-old but still relevant classic Building a Large-Scale E-commerce site with Apache and mod_perl. Their tiers were:
Load balancers
Reverse proxies
Web/app servers
Database servers
This is still the blueprint for large-scale sites. Even larger web-scale sites may need something more arcane, but this is the foundation for understanding even them.
Note that they used mod_perl, which means their web servers were their app servers. If you were using Java at that time, you would have run the app servers as a tier behind the web servers (and by 'web servers' I mean Apache, handling HTTP parsing, TLS, and static files; fetching and carrying, but no logic), and connected them with AJP. You might still do that today, but you would be more likely to just use the app servers as your web servers (i.e. no Apache at all, just JBoss or similar). App servers are now solid enough to do this, and you can rely on the reverse proxies and a content distribution network to do most of the fetching and carrying anyway.
As for a caching tier, the reverse proxies are a caching tier in front of the app servers, but they did app-level caching on the app server machines, with a federated cache (you'd use memcached or similar for this today). I think that's still a viable option today. I don't see a reason to partition your app-tier servers into dedicated app and cache servers; I'd be interested to hear of reasons to do that.
I don't think splitting the presentation and business logic in the app tier is an idea that ever really took off. Some projects probably do it, but I would imagine that is because they have architecture astronauts in charge, rather than for any good reason. That said, it is common to have an app tier that makes heavy use of service tiers further back (this is SOA, I guess), and the ultimate extension of that is essentially a presentation/logic split, but with heterogeneous logic servers, and the presentation server very much being in charge.
I have heard a lot of people talking recently about middleware, but what is the exact definition of middleware? When I look into middleware, I find a lot of information and some definitions, but while reading that information and those definitions, it seems that almost all 'wares' are in the middle of something. So, are all things middleware?
Or do you have an example of a ware that isn't middleware?
Let's say your company makes 4 different products, and your client has another 3 different products from another 3 different companies.
One day the client thought: why don't we integrate all our systems into one huge system? Ten minutes later their IT department said that will take 2 years.
You (the wise developer) said: why don't we just integrate all the different systems and make them work together? The client's manager stares at you... You continued: we will use middleware; we will study the inputs/outputs of all the different systems and the resources they use, and then choose an appropriate middleware framework.
Still explaining to the non-technical manager:
With the middleware framework in the middle, the first system will produce X, systems Y and Z will consume those outputs, and so on.
Middleware is a terribly nebulous term. What is "middleware" in one case won't be in another. In general, you can expect something classed as middleware to have the following characteristics:
Primarily (usually exclusively) software; usually doesn't need any specialized hardware.
If it weren't there, applications that depend on it would have to incorporate it as part of their application and would experience a lot of duplication.
Almost certainly connects two applications and passes data between them.
You'll notice that this is pretty much the same definition as an operating system. So, for instance, a TCP/IP stack or caching could be considered middleware. But your OS could provide the same features, too. Indeed, middleware can be thought of as a special extension to an operating system, specific to a set of applications that depend on it. It just provides a higher-level service.
Some examples of middleware:
distributed cache
message queue
transaction monitor
packet rewriter
automated backup system
Wikipedia has a quite good explanation: http://en.wikipedia.org/wiki/Middleware
It starts with
Middleware is computer software that connects software components or applications. The software consists of a set of services that allows multiple processes running on one or more machines to interact.
What is Middleware gives a few examples.
There are (at least) three different definitions I'm aware of
in business computing, middleware is messaging and integration software between applications and services
in gaming, middleware is pretty well anything that is provided by a third-party
in (some) embedded software systems, middleware provides services that applications use, which are composed out of the functions provided by the hardware abstraction layer - it sits between the application layer and the hardware abstraction layer.
Simply put, middleware is a software component which provides services to integrate disparate systems together.
In a complex enterprise environment, there are a number of challenges when you need to integrate two or more enterprise systems so they can talk to each other. Normally these systems do not understand each other's language, as they are developed on different platforms using different languages (like C++, Java, COBOL, etc.).
So here middleware software comes into the picture, providing services like:
transformation of message formats from one app to another,
routing and enriching messages, besides taking care of security,
encryption,
validation and
applying different business rules to these messages.
Typical examples of middleware are ESB products like IBM Message Broker (WMB/IIB), WESB, DataPower XI50, Oracle Fusion, Mule, and many others.
Therefore, middleware sits mostly in between the service-consuming apps and the service-providing apps and helps these apps talk to each other.
Middleware is about how our application responds to incoming requests. Middleware looks into the incoming request and makes decisions based on it. We can build entire applications using only middleware. For example, ASP.NET is a web framework comprising the following chief HTTP middleware components:
Exception/error handling
Static file server
Authentication
MVC
There are various middleware components in ASP.NET which receive the incoming request and redirect it to a C# class (in this case a controller class).
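As a rough sketch, here is how those components might be assembled into a pipeline in a minimal ASP.NET Core Program.cs (the error route and the unconfigured authentication scheme are placeholders, not a production setup):

```csharp
var builder = WebApplication.CreateBuilder(args);
builder.Services.AddControllersWithViews();
builder.Services.AddAuthentication(); // no scheme configured; illustrative only

var app = builder.Build();

// The pipeline mirrors the component list above, in order.
app.UseExceptionHandler("/Home/Error"); // exception/error handling
app.UseStaticFiles();                   // static file server
app.UseAuthentication();                // authentication
app.MapDefaultControllerRoute();        // MVC: hands the request to a controller class

app.Run();
```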
Middleware is a general term for software that serves to "glue together" separate, often complex and already existing, programs. Some software components that are frequently connected with middleware include enterprise applications and Web services.
There is a common definition in web application development which is (and I'm making this wording up but it seems to fit): A component which is designed to modify an HTTP request and/or response but does not (usually) serve the response in its entirety, designed to be chained together to form a pipeline of behavioral changes during request processing.
Examples of tasks that are commonly implemented by middleware:
Gzip response compression
HTTP authentication
Request logging
The key point here is that none of these is fully responsible for responding to the client. Instead each changes the behavior in some way as part of the pipeline, leaving the actual response to come from something later in the sequence (pipeline).
Usually, the middleware components run before some sort of "router", which examines the request (often the path) and calls the appropriate code to generate the response.
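As a minimal sketch of that idea in ASP.NET Core terms (the logging and the route are made up for illustration): the middleware does its part and then defers to whatever comes later, and only the router at the end writes the response.

```csharp
// Minimal ASP.NET Core app: one logging middleware plus a terminal "router".
var app = WebApplication.Create(args);

app.Use(async (context, next) =>
{
    Console.WriteLine($"{context.Request.Method} {context.Request.Path}");
    await next(); // leave the actual response to something later in the pipeline
    Console.WriteLine($"-> {context.Response.StatusCode}");
});

// The router at the end of the chain is what actually generates the response.
app.MapGet("/", () => "Hello from the end of the pipeline");

app.Run();
```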
Personally, I hate the term "middleware" for its genericity but it is in common use.
Here is an additional explanation specifically applicable to Ruby on Rails.
Middleware stands between web applications and web services that natively can't communicate and often are written in different languages/frameworks.
One such example is OWIN middleware for the .NET environment. Before OWIN, people were forced to host web apps in Microsoft's hosting software, IIS. After OWIN was developed, it became possible both to host in IIS and to self-host; IIS simply added support for OWIN, which acted as an interface. It also became possible to host .NET web apps on Linux via Mono, which again added support for OWIN.
It also made it easier to create Single Page Applications, with OWIN handling the HTTP request/response context. On top of OWIN you can add authentication/authorization logic, via OAuth2 for example: you can configure the middleware to register a class which contains user authentication logic (e.g. an OAuth2 implementation), or a class which contains logic for managing HTTP request/response messages. That way you can make one application communicate with other applications/services via different data formats (like JSON, XML, etc. if you are targeting the web).
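As a rough sketch of what registering such a class looks like (this assumes the Microsoft.Owin/Katana packages; the logging and response text are made up for illustration):

```csharp
using System;
using System.Threading.Tasks;
using Owin;

// Katana/OWIN startup class: the host (IIS or a self-host) calls Configuration(),
// and each Use(...)/Run(...) call registers a component in the OWIN pipeline.
public class Startup
{
    public void Configuration(IAppBuilder app)
    {
        // Illustrative logging middleware over the OWIN request/response context.
        app.Use(async (context, next) =>
        {
            Console.WriteLine($"{context.Request.Method} {context.Request.Path}");
            await next(); // hand off to the next component in the pipeline
        });

        // Final component: actually writes the response.
        app.Run(async context =>
        {
            await context.Response.WriteAsync("Hello from OWIN");
        });
    }
}
```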
Some examples of middleware: CORBA, Remote Method Invocation (RMI),...
The examples mentioned above are all pieces of software allowing you to take care of communication between different processes (either running on the same machine or distributed over e.g. the internet).
From my own experience with webwork, middleware was the stuff between the users (the web browser) and the backend database. It was the software that took what users put in (for example, orders for iPads), did some magical business logic (e.g. checking whether there are enough iPads available to fill the order), and updated the backend database to reflect those changes.
It is just a piece of software or a tool on which your application executes and which provides capabilities with respect to high availability, scalability, and integration with other software or systems, without you having to worry about application-level code changes.
For example: if the operating system on which your application runs requires an IP change, you do not have to worry about it in your code; you can simply update the configuration of the middleware stack.
Example 2: if you experience problems with runtime memory allocation and feel that your application usage has increased, you do not have to do much about it (unless you have a bug or bottleneck in your code); it is easily handled by tuning the configuration of the middleware software on which your application runs.
Example 3: if you have multiple disparate pieces of software and you need them to talk to each other, or to send data in a common format which is understandable by all the systems, then this is where middleware systems come in handy.
Hope the information provided helps.
It is a software layer between the operating system and the applications on each side of a distributed computing system in a network. In fact, it connects heterogeneous networks and software systems.
If I am not wrong, in a software application framework, based on the context, you can consider middleware for the following roles, which can be combined in order to perform certain activities between the user request and the application response:
Adapter
Sanitizer
Validator
I always thought of it as the oldest software I have had to install. The total app used a web server, a database server, and an application server, with the web server being the middleware between the data and the app.