Distributed tracing from a mobile application to the backend, including custom info in the trace

We have Android and iOS (Objective-C) mobile applications.
Our Business Intelligence team is interested in receiving the following parameters with each data event:
the app version
the user session id
We use gRPC both for mobile-to-backend communication and for communication between the different microservices on the backend.
I am considering sending this information using OpenTracing spans that are started on the mobile app, passing the app version and session id through the baggage.
Can anyone advise whether OpenTracing is suitable for this scenario, or whether there is a better alternative?
We are also considering using Linkerd on the backend.

I'm not sure baggage is really what you want. OpenTracing also offers the possibility of adding tags to a span, which would probably be sufficient for your use-case.
A baggage item is sent downstream along with the span context, whereas a tag is "local" to a span. If you need access to the app version on a span downstream, then you indeed have to use a baggage item, but if all you need is to have the version information within the span, you should just tag it.
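For illustration, here is a minimal sketch of the difference using the OpenTracing Python API (it assumes a concrete tracer such as Jaeger was registered at startup; the key names are invented):

import opentracing

# Assumes a real tracer implementation was registered at startup;
# otherwise global_tracer() returns a no-op tracer.
tracer = opentracing.global_tracer()

with tracer.start_active_span("submit_event") as scope:
    span = scope.span
    # A tag stays local to this one span:
    span.set_tag("app.version", "1.4.2")
    # A baggage item travels downstream with the span context, so every
    # service in the call chain (e.g. across gRPC hops) can read it:
    span.set_baggage_item("session.id", "abc-123")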
About OpenTracing being suitable or not: I'd say that's exactly the purpose of OpenTracing. Not only will you potentially get "automatic" spans from the frameworks you are using (via the "framework integrations"), but you can also relate those to your business information. We have an example in Hawkular APM where we add both "operational" and "business" data to traces.

Rest API Localization - Headers vs Payload

We have one POST API live in production. Now we have a requirement to accept localization information and proceed with execution accordingly.
e.g. if distanceUnit is "KM" then process all incoming data in Kilometers.
There are three options I could think of to accept localization information.
As an HTTP header, e.g. localization: {"distanceUnit": "km"}
As part of the payload itself.
As a request parameter.
I like the first option as
it doesn't change the API contract.
It's easier for other APIs to send this info in case they need to be localized in the future.
Localization is part of content negotiation, so I don't think it should be part of the payload or a query parameter.
Any opinions here would be helpful to zero in on the first or second option.
Thanks.
While Accept-Language, as indicated by the link Kit posted, may be tempting, it only supports registered language tags, maintained by IANA, the standardization body of the Web, and not arbitrary configuration options out of the box. It may be tempting to default to miles for, e.g., Accept-Language: en-US and to kilometers elsewhere, but American scientists may then take issue with your application if they want to use km instead of miles. If that isn't a concern for you, this could clearly be an option to consider. As for custom HTTP headers, I wouldn't recommend them: the general problem with custom headers is that arbitrary generic HTTP clients do not support them, which somewhat contradicts the reason for using a REST architecture in the first place.
Let us transfer your problem to the Web domain for a second and see how we usually solve that task there. As REST is basically just a generalized approach to the common way we humans interact with the Web, any concepts used on the Web also apply to a REST architecture. Thus, designing the whole interaction flow as if your application were interacting with a typical Web page is just common practice (or at least should be).
On the Web a so-called Web form is used to "teach" a Web client (a.k.a. browser) what data the server expects as input. It not only teaches the client about the respective properties the server expects or supports for a certain resource, but also about which HTTP method to use, the target URI to send the request to, and the media type to use, which is often implicitly application/x-www-form-urlencoded but may also be multipart/form-data.
The usage of forms and links falls under the HATEOAS constraint, where these concepts allow clients to progress through their task, e.g. buying an item in a Web shop or administering users in a system, without ever having to consult external documentation at all. Applications basically just use the built-in hypermedia capabilities to progress through their tasks. Clients usually follow some kind of predefined process where the server instructs them on what they need to do to add an item to the shopping cart, or on how to add or edit a user, while still operating on a generic HTML document that by itself isn't tailored to the task at hand. This approach allows Web clients to render all kinds of pages and users to interact with those generic pages. If something in that page representation changes, your browser will automatically adapt and render the new version on the next request. Hence, the system is able to evolve over time and adapt to changes easily. This is probably one of the core reasons why anyone wants to use a REST architecture in the first place.
So, back to the topic. On the Web a server would advertise to a client that it supports various localization options with the above-mentioned forms. A user might be presented with a choice or dropdown option where s/he can select the appropriate one. The user usually does not care how this input is transferred to the server, or about the internals of the server at all. All s/he cares about is that the data will be available after the request has been submitted (in the case of adding or updating a resource). This also holds true for applications in a REST architecture.
You might see a pattern here. REST and the browsable Web are basically the same thing. The latter, though, focuses on human interaction, while the former should allow applications to "surf the Web" and follow along with processes outlined by the server (semi-)automatically. As such, it should be clear by now that the same concepts that apply to the browsable Web also apply to REST and applications in that REST architecture.
I like the first option as ... it doesn't change the API contract
Clients shouldn't bind to a particular API, as this creates coupling, which REST tries to avoid at all costs. Instead of directly binding to an API, the Web, and as such also REST, should use contracts built on hypermedia types that define the admissible syntax and semantics of the messages exchanged. By abstracting the contract away from the API itself and into the media type, a client can support various contracts simultaneously. The generality of a media type furthermore allows expressing various different things with the same media type, which increases the likelihood of reuse and thus better integration support in the application layer.
Supporting various media types is similar to speaking different languages. By being able to speak various languages you just increase the likelihood that you will be able to communicate with other people (services) out of the box, without needing to learn those languages beforehand. A client can tell a server via the Accept header which media types it is able to "speak" (a.k.a. process), and the server will either respond with one of these or with a 406 Not Acceptable. That error response is, as Jim Webber put it, coordination data that at all times tells you whether everything went well or, in the case of failure, gives you feedback on what went wrong.
In order to stay future-proof I would therefore suggest designing the configuration around hypertext-enabled media types that support forms, e.g. HTML forms, application/hal-forms+json or application/ion+json. If in the future you need to add further configuration options, adding them is a trivial task. Whether that configuration is exposed as its own resource that you just link to, embedded within the resource, or not returned to the client at all is also a choice you have. If the same configuration may be used by multiple resources, it would be beneficial to expose it as its own resource and just create a reference from each resource to that configuration, but as mentioned these are design decisions you have to make.
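As a sketch of what that could look like, a HAL-FORMS style response might advertise the supported units as selectable options (the property names and option values here are invented, and the exact attribute set depends on the media type's spec):

{
  "_templates": {
    "default": {
      "method": "POST",
      "properties": [
        { "name": "distanceUnit", "required": true,
          "options": { "inline": ["km", "mi"] } },
        { "name": "distance", "required": true }
      ]
    }
  }
}

A client that understands application/hal-forms+json can render this as a dropdown without ever being hard-coded against your particular API.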
If the POST request body is the only place where this is used, and you never have to do GET requests and automatically apply any conversion, my preference would probably go to adding it to the body.
It's nice to have a full document that contains all the information to describe itself, without requiring external out-of-band data to fully interpret its meaning.
You might like to define your schema to always include the unit in relevant parts of the document, for example:
distance: [5, 'km']
or, as you said, do it once at the top of the doc.
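Either way, the server-side normalization stays trivial. A tiny Python sketch (the helper name and supported units are illustrative):

KM_PER_MILE = 1.609344

def to_km(value: float, unit: str) -> float:
    """Normalize a distance to kilometers; supports 'km' and 'mi'."""
    if unit == "km":
        return value
    if unit == "mi":
        return value * KM_PER_MILE
    raise ValueError(f"unknown distance unit: {unit}")

# Works per field, e.g. distance: [5, 'km'] ...
assert to_km(5, "km") == 5
# ... or applied once with a document-level unit from the top of the doc.
assert round(to_km(5, "mi"), 3) == 8.047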

Managing UI requirements in a microservice architecture

We have different client applications (each is built with a different UI and is targeted at a different sales channel) that are used to capture orders that ultimately need to be processed by our factory.
At first we decided to offer a single "order" microservice that would be used by all these client applications for business rule execution and data storage. This microservice will also trigger our back-office processes such as client profile updates, order analysis, document storage in our electronic vault, invoicing, communications, etc.
The challenge we are facing is that these client applications are developed by teams that are external to ours (we are a back-office team only). Each team responsible for developing a client application will be able to offer a different UX to its users (some will allow saving orders in an incomplete state, some will capture data using a specific workflow, some will use text fields instead of list boxes for some values, etc.).
This diversity of behaviors across client applications is an issue because our microservice logic will become very complex in order to support all those UI requirements. Moreover, every time a change is made to one of the client applications, we will have to modify our microservice, which is a case of strong coupling.
My questions are: What would be your best advice to manage this issue? Should we let each application capture the data the way it wants (and persist it if needed in its own database) and have it call our microservice only when an order is complete and compliant with our API contract?
Should we keep our idea of having a single "order" microservice for everyone and force each client application to capture the data the same way?
Any other option?
We want to reduce the duplication of data and business rules in our ecosystem, but at the same time we don't want our "order" microservice to become a mess.
Many thanks for your help.
Moreover, every time a change is made to one of the client applications, we will have to modify our microservice, which is a case of strong coupling.
This rings alarm bells for me. A change to a UI shouldn't require a change to a backend service. (The exception would be if a new feature were being added to a system and the backend service needed to play a part in supporting that feature, but I wouldn't just call that a change to a client.) As you have said, it's strong coupling, and that's something to be avoided in a microservices environment.
Ideally, your service should provide a generic, programmatic API that is flexible enough to support multiple UIs (or other non-UI applications) without having any knowledge of how the UIs work.
It sounds like you have some decisions to make about what responsibilities your service will and won't take on:
Does it make more sense for your generic orders service to facilitate the storage/retrieval/completion of incomplete orders, or to force its clients to manage this somewhere else?
Does it make more sense for your generic service to provide facilities to assist in the tracking of workflows, or to force the UIs that need that functionality to find it elsewhere?
For clients that want to show list boxes, does it make sense for your generic orders service to provide APIs that aid in populating those boxes?
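To make the shape of such a generic service concrete, here is a minimal sketch of an order service that supports draft (incomplete) orders without knowing anything about any particular UI; all names and the one business rule are invented for illustration:

from dataclasses import dataclass, field
import uuid

@dataclass
class Order:
    id: str
    channel: str                       # which client application created it
    items: list = field(default_factory=list)
    complete: bool = False             # drafts are allowed until submission

class OrderService:
    """Generic order API with no knowledge of any particular UI."""

    def __init__(self):
        self._orders = {}

    def create_draft(self, channel: str) -> Order:
        order = Order(id=str(uuid.uuid4()), channel=channel)
        self._orders[order.id] = order
        return order

    def add_item(self, order_id: str, item: dict) -> None:
        self._orders[order_id].items.append(item)

    def submit(self, order_id: str) -> Order:
        order = self._orders[order_id]
        if not order.items:            # a stand-in for real business rules
            raise ValueError("an order needs at least one item")
        order.complete = True
        return order

Whether you keep the draft-handling part or push it back to the clients is exactly the kind of decision outlined above.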
Should we let each application capture the data the way it wants (and persist it if needed in its own database) and have it call our microservice only when an order is complete and compliant with our API contract?
It really depends on whether you think that's the most sensible way for your service to behave. Something that will play into that is how similar or dissimilar the needs of each UI are. If 4 out of 5 UIs have the same needs, it could well make sense to support that generically in your service. If every single UI behaves differently from the others, putting that functionality in your generic orders service would amount to storing frontend code somewhere it doesn't belong.
It seems like there might also be some organisational considerations to these decisions. If the teams using your service are only frontend teams (i.e. without capacity/skills to build backend services), then someone will still have to build the backend functionality they require.
Should we keep our idea of having a single "order" microservice for everyone and force each client application to capture the data the same way?
Yes to the idea of having a single order service with a generic interface for everyone. With regards to forcing client applications to capture data a certain way, your API will only dictate what they need to do to create an order. You can't (and shouldn't) force anything on them about the way they capture the data before calling your service. They can do it however they like. The questions are really around whether your service supports various models of capture or pushes that responsibility back to the frontend.
What would be your best advice to manage this issue?
Collaborate with the teams that will use the service. Gather as much information as you can about the use cases in which they intend to use it. Discover what is common to the majority and choose which of that you will support. Create a semi-formal spec (e.g. a well-documented OpenAPI definition), share it with the client teams, ask for feedback, and iterate. For the parts of the UIs that aren't common across clients, strongly consider telling those teams they'll need to support those elements of their design themselves, especially if they represent significant work on your end.

Is web-based real-time communication incompatible with REST paradigm?

Web applications have experienced a great paradigm shift over the last few years.
A decade ago (and unfortunately even nowadays), web applications lived only in heavyweight servers, which processed everything from data to presentation formats and sent it to dumb clients (browsers) that only rendered the server's output.
Then AJAX joined the game and web-applications started to turn into something that lived between the server and the browser.
During the climax of AJAX, the web-application logic started to live entirely in the browser. I think this was when HTTP RESTful APIs started to emerge. Suddenly every new service had its kind-of RESTful API, and suddenly JavaScript MV* frameworks started popping up like popcorn. The use of mobile devices also greatly increased, and REST fits these kinds of scenarios just great. I say "kind-of RESTful" here because almost every API that claims to be REST isn't. But that's an entirely different story.
In fact, I became a sort of a "REST evangelist".
When I thought that web applications couldn't evolve much more, a new era seemed to be dawning: stateful, persistent-connection web applications.
Meteor is a brilliant example of a framework for that kind of application. Then I saw this video, in which Matt DeBergalis talks about Meteor, and both do a fantastic job!
However, he is kind of putting down REST APIs for these purposes in favor of persistent real-time connections.
I would very much like to have real-time model updates, for example, while still having all the REST awesomeness.
Streaming REST APIs seem like what I need (firehose.io and Twitter's API, for example), but there is very little info on this new kind of API.
So my question is:
Is web-based real-time communication incompatible with REST paradigm?
(Sorry for the long introductory text, but I thought that this question would only make sense with some context)
Stateful persistent TCP/IP connections for web applications are great, as long as you are not moving around.
I have developed a real-time web-based framework, and in my experience I found that when using a web browser on a mobile device, the IP address kept changing as I moved from tower to tower, or from Wi-Fi to Wi-Fi.
When IP addresses keep changing, the notion that it is a persistent connection evaporates rather quickly.
A framework for real-time web apps has to be architected with the assumption that connections will be transient, and it must implement its own notion of a session while the underlying IP connection to the back end keeps changing.
Once a session has been defined and used in all requests and responses between clients and servers, one essentially has a 'web connection'. And now, one can develop real-time web based apps using the REST paradigm.
The back-end server of the framework has to be intelligent enough to queue up updates while IP addresses are undergoing transitions and then sync up once a TCP/IP connection has been re-established.
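A sketch of that queue-and-sync idea in Python (all names are invented; connection stands for whatever transport object the framework currently holds for the session):

import collections

class SessionChannel:
    """Server-side per-session outbox: queues updates while the client's
    transport is down and flushes them when it reconnects."""

    def __init__(self, session_id: str):
        self.session_id = session_id
        self.connection = None              # current transport, if any
        self.pending = collections.deque()  # updates awaiting delivery

    def push(self, update: dict) -> None:
        if self.connection is not None:
            self.connection.send(update)
        else:
            self.pending.append(update)     # queue while offline

    def attach(self, connection) -> None:
        """Called when the client reconnects, possibly from a new IP."""
        self.connection = connection
        while self.pending:                 # sync up on re-establishment
            connection.send(self.pending.popleft())

    def detach(self) -> None:
        self.connection = None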
The short answer is: yes, you can do real-time web-based apps using the REST paradigm.
If you want to play with one, let me know.
I'm very interested in this subject too. This post has a few links to papers that discuss some of the troubles with poorly-designed RPC:
http://thomasdavis.github.com/2012/04/11/the-obligatory-refutation-of-rpc.html
I am not saying Meteor is poorly designed, because I do not know much about Meteor.
In any case, I think I want the best of both worlds. I want to benefit from REST and all that it affords with the constrained generic interface, addressability, statelessness, etc.
And, I don't want to get left behind in this "real-time" web revolution either! It's definitely very awesome.
I am wondering if there is not a hybrid approach that can work:
RESTful endpoints can allow a client to enter a space and follow links to related documents, as HATEOAS calls for. But for the "stream of updates" to a resource, perhaps the "subscription name" could itself be a URI which, when browsed to in a point-in-time single request (through the web browser's address bar or curl, say), would return either a representation of the "current state", or a list of links with the hrefs of prior states of the resource and/or a way to query the discrete "events" that have occurred against the object.
In this way, if you start with "version 1" of the entity and then replay each of the events against it, you can mutate it up to its "current state", and these events could be streamed to a client that does not want to fetch complete representations just because one small part of an entity has changed. This is basically the concept of an "event store", which is covered in much of the CQRS material out there.
As far as being REST-compatible, I believe this approach has been done (though I'm not sure about the streaming side of it); I cannot remember if it was in this book, http://shop.oreilly.com/product/9780596805838.do (REST in Practice), or in a presentation I heard by Vaughn Vernon in this recorded talk from QCon 2010: http://www.infoq.com/presentations/RESTful-SOA-DDD.
He talked about a URI design something like this (I don't remember exactly)
host/entity <-- current version of a resource
host/entity/events <-- list of events that have happened to mutate the object into its current state
Example:
host/entity/events/1 <-- this would correspond to the creation of the entity
host/entity/events/2 <-- this would correspond to the second event ever against the entity
He may have also had something there for history, the complete moment-in-time state, like:
host/entity/version/2 <-- this would be the entire state of the entity after the event 2 above.
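To make the replay idea concrete, here is a minimal event-sourcing sketch in Python (the entity shape and event names are made up):

def apply_event(state: dict, event: dict) -> dict:
    """Fold one event into the entity's state."""
    if event["type"] == "EntityCreated":
        return {"id": event["id"], "name": event["name"]}
    if event["type"] == "NameChanged":
        return {**state, "name": event["name"]}
    return state

# GET host/entity/events would return something like this list;
# replaying it from the first event yields the current representation:
events = [
    {"type": "EntityCreated", "id": "42", "name": "draft"},
    {"type": "NameChanged", "name": "final"},
]

state = {}
for event in events:
    state = apply_event(state, event)
# state is now {"id": "42", "name": "final"}, i.e. what host/entity returns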
Vaughn recently published a book, Implementing Domain-Driven Design, which from the table of contents looks like it covers REST and event-driven architecture: http://www.amazon.com/gp/product/0321834577
I'm the author of http://firehose.io/, a realtime framework based on the premise that streaming RESTful APIs can and should exist. From the project website:
Firehose is a minimally invasive way of building realtime web apps without complex protocols or rewriting your app from scratch. It's a dirt-simple pub/sub server that keeps client-side JavaScript models in sync with the server code via WebSockets or HTTP long polling. It fully embraces RESTful design patterns, which means you'll end up with a nice API after you build your app.
It's my hope that this framework prevents the Internet from going back into the RPC dark ages, but we'll see what happens. We do use Firehose.io in production at Poll Everywhere to push millions of messages per day to people on all sorts of different devices. It works.
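For a feel of the long-polling half of that design, here is a toy subscriber loop in Python; the endpoint shape and cursor parameter are illustrative, not Firehose's actual API:

import json
import urllib.request

def subscribe(channel_url: str):
    """Yield messages forever; the server holds each request open
    until something newer than last_sequence exists."""
    last_sequence = 0
    while True:
        url = f"{channel_url}?last_sequence={last_sequence}"
        with urllib.request.urlopen(url) as resp:
            message = json.load(resp)
        last_sequence = message["sequence"]
        yield message["payload"]

The same channel URL could equally be fetched once with curl to inspect its current state, which is what keeps the design RESTful.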

Alternative to building a proper web service for iPhone app to consume

I am in the process of scoping the development of an iPhone app for a client. Among other things, the app will allow users to browse through and place orders on specific (tangible) products.
The client has a website that currently does a similar thing and due to their limited budget and the fact that the website runs on a third-party proprietary platform which they have no control over, we are investigating possible alternatives to building a web service.
On the website, user registration and authentication, as well as order placing is done through POST requests via secure HTTP. The response is always a formatted HTML page which will contain strings indicating whether the request was successful or not, and if there was an error, what the error is etc.
So provided I can replicate the POST requests on the phone, and parse the HTML responses to read the results of each request, do you think this is an acceptable alternative to building a web service to handle this?
Apart from the possibility of pages changing (which we can manage) and the fact that I will probably have to download and parse a relatively large HTML response, are there any other drawbacks to this solution and is there anything else that I might be missing?
Many thanks in advance for your thoughts.
Cheers,
Rog
You could create an intermediary server that communicates with the client's server and exposes some REST web services with JSON responses (small overhead and easy to handle) to be consumed by the iPhone app.
So, you're going to parse HTML and formulate POSTs off a third-party server, and pray that they don't even so much as rename a form field.
Your question is in two parts:
Do I think that a miracle is an acceptable solution? I don't.
Do I think that, aside from the fact a miracle is required, there are any other drawbacks? None that I can think of.
You didn't ask, but this is a terrible course of action. Two suggestions.
I spy an assumption that the providers of the third-party platform aren't interested in enabling third-party applications by providing an API. They have a very good business reason to do so, which is that it promotes platform lock-in. Reach out to their support department and have a talk with them.
You have to sell the client on building an intermediary web service. To at least mitigate the damage that changes on the third-party platform can do to your app, I recommend that you build and operate a proxy that receives requests from your applications and forwards them to the third-party platform. You should build into this client-server protocol a means of returning "we are in maintenance mode, go away" messages to apps, for that inevitable day when the third-party server changes something that breaks your app (they swapped the billing and shipping address pages, for instance) and you have to rush an update through Apple to deal with it.
The proxy could be written in something flexible and easy to bash stuff out in, such as PHP, Python, Perl, or Ruby. It could be hosted at Amazon on a micro instance.
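A toy sketch of such a proxy's core, in Python (the upstream URL, form fields, and maintenance flag are all invented for illustration):

import urllib.parse
import urllib.request

MAINTENANCE = False  # flip this when the upstream site changes and breaks scraping

def handle_order(order: dict) -> dict:
    """Accept the app's JSON, replay it as the website's form POST,
    and scrape the HTML result back into a small JSON contract."""
    if MAINTENANCE:
        return {"status": "maintenance", "message": "we are in maintenance mode, go away"}
    form = urllib.parse.urlencode({
        "qty": order["quantity"],
        "sku": order["product_id"],
    }).encode()
    with urllib.request.urlopen("https://thirdparty.example/order", form) as resp:
        html = resp.read().decode()
    ok = "Thank you for your order" in html   # fragile by design
    return {"status": "ok" if ok else "error"}

The app then only ever speaks this small JSON contract, and all the fragile HTML knowledge lives in one replaceable place.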
P.S. This question is inappropriately tagged Objective-C.
HTML is the worst because of parsing (1-2 seconds per page), memory, and changes, but you already know that. Check in advance that ALL the data you need is exposed in the HTML.
If you use an intermediary server you are moving the work elsewhere, and you have another server to maintain. I would only do that if memory is an issue. Check How To Choose The Best XML Parser for Your iPhone Project for memory/performance/XPath support. libxml2 is a good option, but it depends on your needs. And you may want to check ASIHTTPRequest's features before using the SDK.
I think using JSON would help reduce the parsing time. By building a REST service that, when sent a GET request, returns the correct information ready for easy sorting, you could display the output a lot faster than by parsing straight HTML.
I prefer JSON over XML, but everyone has their personal preference. You should look at a few very good libraries that are built specifically for parsing both XML and JSON.
For XML I recommend using the built-in libxml parser, although it can sometimes prove very difficult to use. A simple Google search will bring up a heap of results that relate specifically to which parser should be used depending on the task to be completed.
As for a JSON parser, I recommend SBJSON. I am currently using it in one of the biggest projects I have undertaken, and it is definitely working perfectly for my use.
If you need a good way to connect to a RESTful web service, you should try LRResty.
Don't go for a parsing solution on the iPhone for 4 reasons:
The server can change its design and break your application (App Store submission takes a long time). It can also detect, from the user agent, that the requests are sent from an application, and you would have to update the application to change that.
Some of the requests might be made through JavaScript, so you not only have to parse (X)HTML but also handle JavaScript requests (which may, but don't have to, take the form of XMLHttpRequest).
Long-term evolution of the mobile market: maybe your client wants (or will want) an application for Android, BlackBerry, Bada OS (Samsung), Symbian (Nokia/Ovi Store), Java Mobile, or Windows Phone 7?
And of course the network traffic, memory, and CPU needed to parse HTML (look at the time it takes a browser to do it).
Regarding the traffic: if the application will not have huge traffic, you can host the proxy yourself, or you can find a provider to host it for you. I guess you won't need more than a couple of megabytes of storage, but possibly more traffic. For less than 100€/year you can find hosts with unlimited traffic (like the OVH Pro plan or Infomaniak). But if you want to go with Java, have a look at Google App Engine: you pay only if your traffic is significant and your application consumes many CPU cycles; if not, you don't have to pay. And it's hosted on Google's servers: reliable.
If the client is open to it, you could consider the PayPal API.

What is middleware exactly?

I have heard a lot of people talking recently about middleware, but what is the exact definition of middleware? When I look into it, I find a lot of information and some definitions, but while reading them it seems that almost all "wares" are in the middle of something. So, are all things middleware?
Or do you have an example of a ware that isn't middleware?
Let's say your company makes 4 different products, and your client has another 3 different products from 3 other companies.
One day the client thought: why don't we integrate all our systems into one huge system? Ten minutes later their IT department said that would take 2 years.
You (the wise developer) said: why don't we just integrate all the different systems and make them work together? The client's manager stares at you... You continued: we will use middleware; we will study the inputs/outputs of all the different systems and the resources they use, and then choose an appropriate middleware framework.
Still explaining to the non-technical manager:
With the middleware framework in the middle, the first system produces X stuff, systems Y and Z consume those outputs, and so on.
Middleware is a terribly nebulous term. What is "middleware" in one case won't be in another. In general, you can expect something classed as middleware to have the following characteristics:
Primarily (usually exclusively) software; usually doesn't need any specialized hardware.
If it weren't there, applications that depend on it would have to incorporate it as part of their application and would experience a lot of duplication.
Almost certainly connects two applications and passes data between them.
You'll notice that this is pretty much the same definition as an operating system. So, for instance, a TCP/IP stack or caching could be considered middleware. But your OS could provide the same features, too. Indeed, middleware can be thought of as a special extension to an operating system, specific to a set of applications that depend on it. It just provides a higher-level service.
Some examples of middleware:
distributed cache
message queue
transaction monitor
packet rewriter
automated backup system
Wikipedia has a quite good explanation: http://en.wikipedia.org/wiki/Middleware
It starts with
Middleware is computer software that connects software components or applications. The software consists of a set of services that allows multiple processes running on one or more machines to interact.
What is Middleware gives a few examples.
There are (at least) three different definitions I'm aware of:
in business computing, middleware is messaging and integration software between applications and services
in gaming, middleware is pretty well anything that is provided by a third-party
in (some) embedded software systems, middleware provides services that applications use, which are composed out of the functions provided by the hardware abstraction layer - it sits between the application layer and the hardware abstraction layer.
Simply put, middleware is a software component that provides services to integrate disparate systems together.
In a complex enterprise environment, there are a number of challenges when you need to integrate two or more enterprise systems so they can talk to each other. Normally these systems do not understand each other's language, as they are developed on different platforms using different languages (like C++, Java, COBOL, etc.).
So here middleware software comes into the picture, providing services like:
transformation of message formats from one app to another,
routing and enrichment of messages,
security and encryption,
validation, and
application of different business rules to these messages.
Typical examples of middleware are ESB products like IBM Message Broker (WMB/IIB), WESB, DataPower XI50, Oracle Fusion, Mule, and many others.
Therefore, middleware sits mostly between the service-consuming apps and the service-providing apps, helping these apps talk to each other.
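As a toy illustration of the transformation-and-routing role in Python (the message formats and route names are invented):

import json
import xml.etree.ElementTree as ET

def transform(xml_message: str) -> dict:
    """Translate a legacy system's XML order into the dict/JSON shape
    a newer application expects."""
    root = ET.fromstring(xml_message)
    return {
        "orderId": root.findtext("id"),
        "amount": float(root.findtext("amount")),
    }

def route(message: dict) -> str:
    """Pick a destination system based on a simple business rule."""
    return "invoicing" if message["amount"] > 0 else "audit"

msg = transform("<order><id>42</id><amount>9.99</amount></order>")
print(route(msg), json.dumps(msg))  # invoicing {"orderId": "42", "amount": 9.99}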
Middleware is about how our application responds to incoming requests. Middleware looks into an incoming request and makes decisions based on it. We can build entire applications using only middleware. For example, ASP.NET is a web framework comprising the following chief HTTP middleware components:
Exception/error handling
Static file server
Authentication
MVC
Various middleware components in ASP.NET receive the incoming request and redirect it to a C# class (in this case a controller class).
Middleware is a general term for software that serves to "glue together" separate, often complex and already existing, programs. Some software components that are frequently connected with middleware include enterprise applications and Web services.
There is a common definition in web application development which is (and I'm making this wording up, but it seems to fit): a component which is designed to modify an HTTP request and/or response, but does not (usually) serve the response in its entirety, and which is designed to be chained together with others to form a pipeline of behavioral changes during request processing.
Examples of tasks that are commonly implemented by middleware:
Gzip response compression
HTTP authentication
Request logging
The key point here is that none of these is fully responsible for responding to the client. Instead each changes the behavior in some way as part of the pipeline, leaving the actual response to come from something later in the sequence (pipeline).
Usually, the middleware runs before some sort of "router", which examines the request (often the path) and calls the appropriate code to generate the response.
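In Python, WSGI middleware has exactly this shape. A minimal sketch of a logging layer chained in front of a "router" (not tied to any particular framework):

def request_logger(app):
    """Middleware: observe the request, then defer to the next layer."""
    def wrapper(environ, start_response):
        print(environ["REQUEST_METHOD"], environ["PATH_INFO"])
        return app(environ, start_response)  # response comes from later in the pipeline
    return wrapper

def router(environ, start_response):
    """The end of the chain actually produces the response."""
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"hello\n"]

application = request_logger(router)  # pipeline: logger -> router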
Personally, I hate the term "middleware" for its genericity but it is in common use.
Here is an additional explanation specifically applicable to Ruby on Rails.
Middleware stands between web applications and web services that natively can't communicate and often are written in different languages/frameworks.
One such example is OWIN middleware for the .NET environment. Before OWIN, people were forced to host web apps in IIS, Microsoft's hosting software. After OWIN was developed, it added the capacity to host both in IIS and in self-hosted processes; IIS simply gained support for OWIN, which acted as an interface. It also became possible to host .NET web apps on Linux via Mono, which again added support for OWIN.
It also added the capacity to create single-page applications, with OWIN handling the HTTP request/response context. On top of OWIN you can add authentication/authorization logic, via OAuth2 for example: you can configure the middleware to register a class containing user-authentication logic (e.g. an OAuth2 implementation) or a class containing logic for managing HTTP request/response messages. That way you can make one application communicate with other applications/services via different data formats (like JSON, XML, etc., if you are targeting the web).
Some examples of middleware: CORBA, Remote Method Invocation (RMI),...
The examples mentioned above are all pieces of software allowing you to take care of communication between different processes (either running on the same machine or distributed over e.g. the internet).
From my own experience with web work, middleware was the stuff between the users (the web browser) and the backend database. It was the software that took what users put in (for example, orders for iPads), did some magical business logic (e.g. checked whether there were enough iPads available to fill the order), and updated the backend database to reflect those changes.
It is just a piece of software or a tool on which your application executes, and which provides capabilities with respect to high availability, scalability, and integration with other software or systems, without you having to bother with application-level code changes.
For example: the operating system on which your application runs requires an IP change. You do not have to worry about it in your code; it is the middleware stack on which you simply update the configuration.
Example 2: You experience problems with your runtime memory allocation and feel that your application's usage has increased. Unless you have a bug or bottleneck in your code, you do not have to do much about it; it is easily handled by tuning the configuration of the middleware software on which your application runs.
Example 3: You have multiple disparate pieces of software and you need them to talk to each other or send data in a common format understandable by all the systems. This is where middleware systems come in handy.
Hope the information provided helps.
It is a software layer between the operating system and the applications on each side of a distributed computing system in a network. In fact, it connects heterogeneous network and software systems.
If I am not wrong, in a software application framework, depending on the context, you can consider middleware for the following roles, which can be combined in order to perform certain activities between the user request and the application response:
Adapter
Sanitizer
Validator
I always thought of it as the oldest software I have had to install. The total app used a web server, a database server, and an application server. The web server was the middleware between the data and the app.