Web Service multi-client basic question - service

I've been reading up on web services all day, but I'm still missing a basic understanding of how web services relate to multiple clients.
The web service runs on a web server. The service exposes various methods. Multiple clients may call the same service method simultaneously. Question: does each client get its own copy of the method, or does the code in the method implementation have to start a thread for each client and process each client's request in its own thread? What am I missing?
Thanks in advance.
DP

It depends on configuration. In WCF you can configure a 'singleton', i.e. one service instance that serves all clients, or you can choose another mode that creates a separate instance for each client call. You will find the details on MSDN.
Edit:
Check the ServiceBehavior attribute and its InstanceContextMode property (Single, PerCall, PerSession).
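The same choice exists on other stacks, and in every case it is the hosting container, not your method code, that dispatches each incoming request on its own worker thread; your job is only to keep shared state thread-safe. As a rough Java-side illustration (Jersey/JAX-RS, with a made-up resource class, not from the original question): by default JAX-RS creates a new resource instance per request, which corresponds to WCF's per-call mode, while @Singleton gives you the single shared instance described above.

    import java.util.concurrent.atomic.AtomicLong;
    import javax.inject.Singleton;
    import javax.ws.rs.GET;
    import javax.ws.rs.Path;
    import javax.ws.rs.Produces;
    import javax.ws.rs.core.MediaType;

    // One shared instance serves all clients; the container runs each
    // request on its own worker thread, so shared state must be thread-safe.
    @Singleton
    @Path("/status")
    public class StatusResource {

        // AtomicLong rather than a plain long: many threads may hit this at once.
        private final AtomicLong hits = new AtomicLong();

        @GET
        @Produces(MediaType.TEXT_PLAIN)
        public String status() {
            return "request number " + hits.incrementAndGet();
        }
    }

Either way, the method body itself never has to spawn threads for clients; the server does that for you.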

Related

Spring Cloud: How to manage requests from Zuul to other services?

I would like to understand the correct approach for managing requests among several microservices, one of which is Zuul:
I have a Zuul app, which is a proxy in front of my microservices. Zuul starts on port 7777 and declares an API like /api/service1/get or /api/service2/get. Every service has an echo endpoint, which is reachable at localhost:7777/api/service1/get and works well.
But those echo endpoints are also reachable directly on the corresponding services, so I can make a request from Postman, let's say, to service1/get and service2/get.
As far as I understand, anybody can call those services either through Zuul or directly. So what is the difference, and what is the real value of Zuul in such a case (beyond the fact that Zuul, as a proxy microservice, can authorize users)?
So what is the correct approach for using Zuul with microservices?
Your question looks like you are asking two things: what Zuul's purpose is, and how to use it. I'm going to answer the first one.
Its purpose is to be the service in front of all the other services you have, like a front door to your system.
The rest of the services should be hidden from the outside world, behind the proxy service.
The point is to route to all the services from one place, so with netflix-zuul you are able to intercept, manipulate, authenticate, and route requests (see the filter sketch below).
You can integrate service discovery (netflix-eureka) so your services are registered there; you don't need to deal with your services' URLs and can access them by the paths you defined and their registered service IDs.
You can integrate load balancing (netflix-ribbon) across your system.
You can control the interactions between your services by adding latency tolerance and fault tolerance logic (netflix-hystrix), so you can provide fallback options when an error occurs.
And so on...
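To make the "intercept and authenticate" point concrete, here is a rough sketch of a Zuul pre-filter for Spring Cloud Netflix Zuul 1.x; the header check and the @Component registration are illustrative assumptions, not something from the question:

    import javax.servlet.http.HttpServletRequest;
    import org.springframework.stereotype.Component;
    import com.netflix.zuul.ZuulFilter;
    import com.netflix.zuul.context.RequestContext;

    // Runs before the request is routed to the target service.
    @Component
    public class AuthPreFilter extends ZuulFilter {

        @Override
        public String filterType() {
            return "pre";        // pre-routing filter
        }

        @Override
        public int filterOrder() {
            return 1;
        }

        @Override
        public boolean shouldFilter() {
            return true;         // apply to every request
        }

        @Override
        public Object run() {
            RequestContext ctx = RequestContext.getCurrentContext();
            HttpServletRequest request = ctx.getRequest();

            // Hypothetical rule: reject requests without an Authorization header.
            if (request.getHeader("Authorization") == null) {
                ctx.setSendZuulResponse(false);    // do not forward to the service
                ctx.setResponseStatusCode(401);
                ctx.setResponseBody("unauthorized");
            }
            return null;         // Zuul ignores the return value
        }
    }

If the services themselves are then closed off at the network level (firewall or security groups), the gateway really is the only door into the system.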

Can a process be both service provider and client on D-Bus

I know that typically a process is either a service provider or a client over D-Bus; is it practically possible for a process to be both a service and a client (I think it's okay)? I have such a need in my project: originally there was a service provider and a client, but new requirements came in and I need the original client to provide a service as well. Are there any downsides, if it's theoretically doable?
Yes, it’s possible, straightforward to do, and there are no downsides as long as it’s a suitable architecture for the problem you’re trying to solve.
Many system services already do just this: they expose a system service on the bus, and also act as clients of other system services which provide information to them.

How often does a RESTful client pull server data

I have a RESTful web-service application that I developed using the NetBeans IDE. The application uses MySQL as its back-end server. What I am wondering now is how often a client application that uses my RESTful application would refresh to reflect data changes on the server.
Are there any default pull intervals that clients get from the RESTful application? Does the framework (JAX-RS) do something about it, or is that my business to take care of?
Thanks in advance
#Abraham
There are no such rules. The only thing you can use to implement this properly is HTTP's caching capabilities. The service must include control information saying how long the representation of a particular resource can be cached, whether it must be revalidated, whether it should never be cached, etc.
On the client side of things, each client may decide its own way of keeping itself in sync with the service. It can be done by storing data locally and serving the end user from a local cache, etc. The service cannot (and shouldn't) know how clients are implemented; the only thing the service can do is include caching information in response messages, as I already mentioned above.
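In JAX-RS terms, "including control information" just means setting the standard HTTP caching headers on the response. A minimal sketch (the resource and the 60-second max-age are made up for illustration):

    import javax.ws.rs.GET;
    import javax.ws.rs.Path;
    import javax.ws.rs.Produces;
    import javax.ws.rs.core.CacheControl;
    import javax.ws.rs.core.MediaType;
    import javax.ws.rs.core.Response;

    @Path("/products")
    public class ProductResource {

        @GET
        @Produces(MediaType.APPLICATION_JSON)
        public Response list() {
            String json = loadProductsAsJson();   // the real MySQL query goes here

            // Tell clients they may reuse this representation for 60 seconds
            // before asking the server again.
            CacheControl cc = new CacheControl();
            cc.setMaxAge(60);

            return Response.ok(json).cacheControl(cc).build();
        }

        private String loadProductsAsJson() {
            return "[]";   // placeholder
        }
    }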
It is your responsibility to schedule the calls to the service again and again. You can set a timeout interval, but there is no built-in pull interval.
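On the client that usually comes down to a simple polling loop whose interval you pick yourself; a rough sketch with the JDK's java.net.http client (the URL and the 30-second period are placeholders):

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    public class PollingClient {

        public static void main(String[] args) {
            HttpClient client = HttpClient.newHttpClient();
            HttpRequest request = HttpRequest.newBuilder(
                    URI.create("http://localhost:8080/api/products")).GET().build();

            // The pull interval is entirely the client's decision; 30 seconds here.
            ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
            scheduler.scheduleAtFixedRate(() -> {
                try {
                    HttpResponse<String> response =
                            client.send(request, HttpResponse.BodyHandlers.ofString());
                    System.out.println("refreshed: " + response.body());
                } catch (Exception e) {
                    e.printStackTrace();   // keep polling even if one call fails
                }
            }, 0, 30, TimeUnit.SECONDS);
        }
    }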

Delivering different kinds of protocols in a SOA architecture

I have a project currently in production that delivers some web services using the REST approach. Now I need to deliver some of these web services in SOAP too (meaning I will need to deliver some of the same web services in SOAP, and others that differ a bit), so I ask you:
Should I incorporate the SOAP stack (libraries, configuration files, ...) into the existing project, building another layer that delivers the data in envelopes (some people call it an "anti-corruption layer")?
Should I build another project that shares only the common canonical model (turning it into a shared library)?
... Or how do you proceed in similar situations?
Please, consider our ideal target a SOA architecture.
Thanks.
In our projects we have a facade layer which exposes the services and maps to business entities, and a business layer where the business logic is run.
So to add a SOAP endpoint for an existing service, we just create a new facade and call into the same business logic.
In many cases it is even simpler: since we use WCF, we can have an HTTP SOAP endpoint for external clients and a binary TCP/IP endpoint for internal clients. The new endpoint can be added by changing the configuration, without any need to change the code.
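In Java (rather than WCF) the same facade idea would be two thin endpoint classes, one JAX-RS and one JAX-WS, delegating to a single business service. The class and operation names below are made up for illustration:

    // OrderService.java -- shared business layer; every protocol facade calls into this.
    public class OrderService {
        public String findOrder(String id) {
            return "{\"id\":\"" + id + "\"}";   // stand-in for the real logic
        }
    }

    // OrderRestFacade.java -- REST facade.
    import javax.ws.rs.GET;
    import javax.ws.rs.Path;
    import javax.ws.rs.PathParam;
    import javax.ws.rs.Produces;
    import javax.ws.rs.core.MediaType;

    @Path("/orders")
    public class OrderRestFacade {
        private final OrderService service = new OrderService();

        @GET
        @Path("/{id}")
        @Produces(MediaType.APPLICATION_JSON)
        public String get(@PathParam("id") String id) {
            return service.findOrder(id);
        }
    }

    // OrderSoapFacade.java -- SOAP facade exposing the same operation.
    import javax.jws.WebMethod;
    import javax.jws.WebService;

    @WebService
    public class OrderSoapFacade {
        private final OrderService service = new OrderService();

        @WebMethod
        public String getOrder(String id) {
            return service.findOrder(id);
        }
    }

Because the facades hold no business logic, adding yet another protocol later is again just one more thin facade.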
The way I think about an SOA system, you have messages and pub/sub. The message is the interface. Getting those messages into and out of the system is an implementation detail. I create an endpoint that accepts a raw message document (more REST-like, but not really REST) as well as an endpoint that accepts the message as a single parameter to a SOAP call. The code that processes the incoming message is a separate concern from the HTTP endpoint enablement.
You can use an ESB for this, where the ESB receives the SOAP messages and sends REST requests to the back end. WSO2 ESB provides this functionality. Please look at this sample [1].
[1] http://wso2.org/project/esb/java/4.0.0/docs/samples/proxy_samples.html#Sample152

Forward HTTP RESTful API requests from an HTTP server to my application

I have a question about the design of an application I'm working on.
I made a monolithic Java application with sockets open 24/7, something like a game server. I'm just trying to say it's a single-JAR application instead of a modular servlet/page-based web application.
I would now like to add a RESTful API to this application, so people/clients can make HTTP requests to my application to obtain certain info. Because of the monolithic nature of my Java application, I'm unsure how to implement this. One other important thing: I'm expecting multiple requests per second, so it would be nice if I could have an existing HTTP server handle the requests, somehow forward them to my app to set up a reply, and have the HTTP server send it back.
Some things I have thought of:
Wrap my application in a Tomcat application, although I'm not sure if Tomcat can run an application continuously instead of only mapping requests to servlets.
Open a socket and parse incoming HTTP requests myself (or there is probably a library for that?). I fear this will have an impact on performance, and I would rather use an existing HTTP server because they are optimized for high traffic.
Use an existing HTTP server to handle the requests (Apache, lighttpd, ...) and have it forward requests to my app via something like SCGI, or use a server that can forward via XML-RPC. Are there any other technologies/protocols to do this?
Any advice on how to handle this?
Thanks!
I'd decouple your RESTful service endpoint as much as possible from your original application. This allows you to scale (add multiple servers for your REST endpoint), but also to change your original application without having to change your REST API directly.
Clients <== REST (HTTP) ==> RESTful endpoint <== legacy (sockets) ==> Legacy backend
So your REST server is on the one hand a service provider for your clients, but at the same time also a client of your original backend.
I would design the RESTful API and then pick one of the existing REST frameworks for Java, like Restlet, and implement the REST service itself. At the same time you can start implementing a gateway between the REST server and your original backend, using sockets.
Pay attention to scalability and performance (e.g. you may want to use connection pools for the REST <=> backend bridge rather than spawning a socket per incoming API request), and also think about the possible advantages of HTTP: you might benefit from caching, etc., as far as your backend application logic allows it.
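A bare-bones sketch of that gateway shape, using the JDK's built-in HttpServer for the REST side and a plain socket to the legacy backend; the port numbers and the one-line "STATUS" protocol are assumptions for illustration, and a real gateway would take its backend socket from a pool rather than opening one per request:

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.io.OutputStream;
    import java.io.PrintWriter;
    import java.net.InetSocketAddress;
    import java.net.Socket;
    import java.nio.charset.StandardCharsets;
    import java.util.concurrent.Executors;
    import com.sun.net.httpserver.HttpServer;

    public class RestGateway {

        public static void main(String[] args) throws Exception {
            HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
            server.setExecutor(Executors.newFixedThreadPool(16));   // serve concurrent requests

            server.createContext("/api/status", exchange -> {
                // Forward the REST call to the legacy backend over its socket protocol.
                try (Socket backend = new Socket("localhost", 9000);
                     PrintWriter out = new PrintWriter(backend.getOutputStream(), true);
                     BufferedReader in = new BufferedReader(
                             new InputStreamReader(backend.getInputStream()))) {

                    out.println("STATUS");          // made-up legacy line protocol
                    String reply = in.readLine();
                    if (reply == null) {
                        reply = "";                 // backend closed without answering
                    }

                    byte[] body = reply.getBytes(StandardCharsets.UTF_8);
                    exchange.getResponseHeaders().add("Content-Type", "text/plain");
                    exchange.sendResponseHeaders(200, body.length);
                    try (OutputStream os = exchange.getResponseBody()) {
                        os.write(body);
                    }
                }
            });

            server.start();
        }
    }

With Restlet or another REST framework the HTTP side looks different, but the overall shape (HTTP endpoint in front, pooled connections to the legacy process behind) stays the same.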