I'm looking to migrate my Cloud Services to Service Fabric (SF); they include a WCF-based Web API and an MVC web UI (MVC 5.2), as well as a number of worker roles. I've seen a few different sources state the following to be true:
You can host WCF Web APIs in SF
You can host MVC v5.x web UIs in SF
You can host the above and allow them to share the publicly exposed ports 80/443 on a single SF cluster
The worker roles are easy, but I have been unable to find any good docs or blog posts on the specifics of how to accomplish #1-3 above. Can anyone point me at some concrete docs/blogs on these topics?
If you're coming from Worker Roles, this doc can help get you started: https://azure.microsoft.com/en-us/documentation/articles/service-fabric-cloud-services-migration-worker-role-stateless-service/
Specifically to your questions:
WCF web APIs should be possible if you're using WCF self-hosting
MVC is only supported with ASP.NET Core 1 (this is still fairly new; docs are in progress; in the meantime, here is an example).
Yes, ASP.NET Core 1 allows this if you use WebListener for your web host, which lets you open listeners either on unique URL paths or on unique hostnames, all on the same port on a single machine (either in the same process or in multiple processes); see the sketch below.
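For the port-sharing point, here's a minimal sketch of what that looks like, assuming ASP.NET Core 1.x with the Microsoft.AspNetCore.Server.WebListener package; the Startup class and the path prefix are placeholders:

using Microsoft.AspNetCore.Hosting;

public class Program
{
    public static void Main(string[] args)
    {
        // WebListener binds through http.sys, so several processes can share
        // port 80/443 as long as each registers a distinct URL prefix or hostname.
        var host = new WebHostBuilder()
            .UseWebListener()
            .UseUrls("http://+:80/myservice/")   // placeholder path prefix on the shared port
            .UseStartup<Startup>()               // Startup is assumed, not shown here
            .Build();

        host.Run();
    }
}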
Related
My customer has two Windows Server 2019 machines.
On both of them, an instance of a SOAP Web Service is running.
URLs:
https://host1.domainname.com/SOAPService
and
https://host2.domainname.com/SOAPService
Now, the requirement of the customer is to provide a single, unique URL that the clients can use to consume the SOAP WebService(s).
I read through several websites and, if I got it right, I need a tool called a "reverse proxy"... Using this tool, clients can access the web service via a URL such as https://host.domainname.com/SOAPService and the tool will automatically route the request to an available web service instance.
Correct?
I also have an architectural question:
On which machine do I have to run such a reverse proxy?
Is it on host1 or host2, or do I need a dedicated machine (like a supervisor)?
If it is a dedicated machine, how can I make this reverse proxy highly available? E.g. is it possible to run two reverse proxies in parallel on different machines? Which tool supports this?
Thanks
At present we have a lot of ASP.net WebAPI service applications hosted on premises. We are planning to move these to Azure AKS. We've identified a lot of common code across these applications which is mostly implemented as ASP.Net reusable middleware components so that the logic is not duplicated in code.
In a K8s environment it makes sense to offload this common functionality to one or more proxy applications which intercepts the requests being forwarded from the ingress to the services (assuming this is the correct approach). Some of the request inspection / manipulation logic is based on the service host and path to be defined in the ingress and even on the headers in the incoming requests.
For example, I considered using OAuth2_proxy but found that even though authentication is quite easy to implement, Azure AD group-based authorization is impossible to do out of the box with it. So what's the idiomatic way to set up such a custom proxy application? (I'm familiar with using libraries such as the ProxyKit middleware in ASP.NET to develop HTTP proxies.)
One approach that comes to mind is to deploy such proxies as sidecar containers in each service application pod but that would mean there'd be unnecessary resource usage by all such duplicate container instances in each pod. I don't see the benefit over the use of middleware components as mentioned previously. :(
The ideal setup would be ingress --> custom proxy 1 --> custom proxy 2 --> custom proxy n --> service where custom proxies would be separately deployable and scalable.
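(For context, the kind of middleware proxy I mean can be sketched with ProxyKit roughly like this; the downstream address is hypothetical:)

using Microsoft.AspNetCore.Builder;
using Microsoft.Extensions.DependencyInjection;
using ProxyKit;

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddProxy(); // registers ProxyKit's forwarding HttpClient
    }

    public void Configure(IApplicationBuilder app)
    {
        // Inspect/enrich the request here (auth, headers, claims), then forward it.
        app.RunProxy(context => context
            .ForwardTo("http://downstream-service") // hypothetical upstream address
            .AddXForwardedHeaders()
            .Send());
    }
}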
So after a lot of reading and googling I found that the solution was to use an API gateway that is available as a library (preferably .NET-based):
Ocelot placed behind the nginx ingress fits the bill perfectly
Ocelot is a .NET API Gateway. This project is aimed at people using .NET running a micro services / service oriented architecture that need a unified point of entry into their system. However it will work with anything that speaks HTTP and run on any platform that ASP.NET Core supports.
Ocelot is currently used by Microsoft and Tencent.
The custom middleware and the header/query/claims transformation features solve my problem. Here are some useful links (a minimal wiring sketch follows them):
Microsoft Docs: Implement API Gateways with Ocelot
Ocelot on Github
Ocelot Documentation
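For orientation, here is a minimal sketch of wiring Ocelot into an ASP.NET Core gateway host; the route definitions themselves live in ocelot.json, which is loaded into configuration (names follow the usual conventions rather than anything from my setup):

using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;
using Ocelot.DependencyInjection;
using Ocelot.Middleware;

public static class Program
{
    public static void Main(string[] args)
    {
        // Minimal Ocelot gateway host; routes are described in ocelot.json.
        new WebHostBuilder()
            .UseKestrel()
            .ConfigureAppConfiguration((context, config) => config.AddJsonFile("ocelot.json"))
            .ConfigureServices(services => services.AddOcelot())
            .Configure(app => app.UseOcelot().Wait())
            .Build()
            .Run();
    }
}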
Features
A quick list of Ocelot's capabilities; for more information, see the documentation.
Routing
Request Aggregation
Service Discovery with Consul & Eureka
Service Fabric
Kubernetes
WebSockets
Authentication
Authorisation
Rate Limiting
Caching
Retry policies / QoS
Load Balancing
Logging / Tracing / Correlation
Headers / Query String / Claims Transformation
Custom Middleware / Delegating Handlers
Configuration / Administration REST API
Platform / Cloud Agnostic
I have set up a cluster with two node types: FrontEnd and BackEnd. The FrontEnd node type hosts stateless services and the BackEnd node type hosts stateful and actor services. Now I have seen examples that use the reverse proxy and http:// calls to communicate with stateful services, and other places that use remoting calls addressing fabric:/ URIs. When should each be used? If there are data-intensive transfers happening between the FrontEnd and BackEnd node types, which protocol would be better?
Actually, fabric:/ isn't a protocol itself; it's just the syntax the Service Fabric Naming Service uses to resolve the actual location of your service. Remoting is the better choice if you don't have to expose your service to external clients, since it chooses the transport based on where the client and the service are located (it may use interprocess communication when both are on the same node), whereas using http:// ties you to that one protocol.
fabric:/ is just a URI scheme. It is used to identify named services, like fabric:/MyApp/MyService.
There is no right answer to this question, there are many variables to take into account to select the right approach.
You can use both and it will be absolutely fine.
It's far more than that, but a simple overview I can give is:
With HTTP communication, the services depend only on each other's endpoints, and both can be developed and deployed in isolation from each other; they keep communicating even when you change service versions and tech stacks. You can use different technologies like Java, Go, or Node.js and still have smooth communication between your services.
With remoting, you might get faster communication but tighter coupling between the services, because both need to understand the same interfaces and entities used for communication; keeping them in sync (compatible) will most of the time require deploying new versions of both services together.
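To make the coupling concrete, here is a minimal remoting sketch; the interface, service name, and URI are illustrative, and both sides must reference the shared contract assembly:

using System;
using System.Threading.Tasks;
using Microsoft.ServiceFabric.Services.Remoting;
using Microsoft.ServiceFabric.Services.Remoting.Client;

// Shared contract assembly referenced by both the caller and the backend service.
public interface IBackendService : IService
{
    Task<string> GetDataAsync(int id);
}

public static class Caller
{
    public static async Task<string> CallBackendAsync()
    {
        // The fabric:/ URI is resolved through the Naming Service; for a partitioned
        // stateful service you would also pass a ServicePartitionKey.
        IBackendService proxy = ServiceProxy.Create<IBackendService>(
            new Uri("fabric:/MyApp/BackendService"));

        return await proxy.GetDataAsync(42);
    }
}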
If performance is not an issue at the start, I would suggest going with the simpler HTTP approach and migrating if it does not meet your demands.
Would it be a valid approach to host both the frontend (ASP.NET MVC 6) and the backend (WCF/Web API services) in MS Service Fabric? Service Fabric is marketed as a platform for running services. Since both MVC and the services need to scale, wouldn't it make sense to have both layers in Service Fabric? Not having to deal with hosting the frontend part separately, let alone scaling it, sounds very compelling.
I might be wrong, but I'm not 100% sure you can host an Asp.Net MVC 6 or WCF app inside Service Fabric, but you can certainly host a Web API app. Whatever Asp.Net app you host needs to support OWIN self-hosting, which I'm not totally sure MVC or WCF supports. If you find out that you can host all of those apps, then sure, you should have at it!
I can say that my company's preferred approach is to have a frontend, static application that only serves up static content (HTML, JS) and have that frontend use the Web API we have hosted in Service Fabric. OWIN self-hosting (which is what you need to do with Service Fabric) doesn't let you GZip static content without first routing it through a proxy like nginx, and you'll probably want GZip compression for any frontend app. So you're better off hosting the frontend static application elsewhere, like a traditional Azure Web App that does support GZip compression.
Hope this helps!
I've just created my first Service Fabric app using the latest VS, and it allows you to create a service as an ASP.NET Core MVC app. Out of the box it uses Kestrel as the web server, but I guess you could use IIS or OWIN as well. If you choose to stick with Kestrel, apparently you can add GZip via middleware (GZip middleware), although I haven't tried this myself.
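For what it's worth, a minimal sketch of enabling GZip with the ASP.NET Core response compression middleware (available from ASP.NET Core 1.1 onward; I haven't verified it inside a Service Fabric-hosted Kestrel service either):

using System.IO.Compression;
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.ResponseCompression;
using Microsoft.Extensions.DependencyInjection;

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddResponseCompression(); // the gzip provider is included by default
        services.Configure<GzipCompressionProviderOptions>(
            options => options.Level = CompressionLevel.Fastest);
    }

    public void Configure(IApplicationBuilder app)
    {
        app.UseResponseCompression(); // must run before middleware that writes the response
        app.UseStaticFiles();         // static content now goes out gzip-compressed
    }
}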
After watching the BUILD conference videos for Azure Service Fabric, I'm left imagining how this might be a good fit for our current microservice-based architecture. There is one thing I'm not entirely sure how I would go about solving, however - the API gateway/proxy.
Consider a less-than-trivial microservice architecture where you have N services running within Azure Service Fabric exposing REST endpoints. In many situations, you want to package these fragmented API endpoints up into a single-entry API for consumers to use, to avoid having them connect to the Service Fabric instances directly. The Azure Service Fabric solution seems so complete in every way that I'm sort of wondering if I missed something obvious when I don't see a way to trivially solve this within the capabilities mentioned during the BUILD talks.
Services like Vulcan aim to solve this problem by having the services register the paths they want routed to them in etcd. I'm guessing one way of solving this may be to create a separate stateful web service that other services can register themselves with, providing service name and the paths they need routed to them. The stateful web service can then route traffic to the correct instance based on its state. This doesn't seem entirely ideal, though, with stuff like removing routes when applications are removed and generally keeping the state in sync with the services deployed within the cluster. Has anybody given this any thought, or have any ideas how one might go about solving this within Azure Service Fabric?
The service registration/discoverability you need to do this is actually already there. There's a stateful system service called the Naming Service, which is basically a registrar of service instances and the endpoints they're listening on. So when you start up a service - either stateless or stateful - and open some listener on it, the address gets registered with the Naming Service.
Now the part you'd need to fill in is the "gateway" that users interact with. This doesn't have to be stateful because the Naming Service manages the stateful part. But you'd have to come up with an addressing scheme that works for you, and then it would just forward requests along to the right place. Basically something like this (a rough code sketch follows the steps):
Receive request.
Use NS to find the service that can take the request.
Forward the request to it and the response back to the user.
If the service doesn't exist anymore, 404.
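A rough sketch of that loop, using ServicePartitionResolver from the Service Fabric SDK; the service name is illustrative and the endpoint-JSON parsing is left as a stub:

using System;
using System.Fabric;
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.ServiceFabric.Services.Client;

public static class GatewaySketch
{
    private static readonly HttpClient Http = new HttpClient();

    public static async Task<HttpResponseMessage> ForwardAsync(string pathAndQuery)
    {
        // 1. Ask the Naming Service where the target service is currently listening.
        ServicePartitionResolver resolver = ServicePartitionResolver.GetDefault();
        ResolvedServicePartition partition = await resolver.ResolveAsync(
            new Uri("fabric:/MyApp/MyService"),   // illustrative service name
            ServicePartitionKey.Singleton,         // singleton partition for a stateless service
            CancellationToken.None);

        // 2. The endpoint address is a small JSON document listing listener addresses;
        //    parsing it down to a base URI is omitted here for brevity.
        string endpointJson = partition.GetEndpoint().Address;
        string baseAddress = ParseHttpEndpoint(endpointJson); // hypothetical helper

        // 3. Forward the request and hand the response back to the caller
        //    (a real gateway would also retry, and return 404 if resolution fails).
        return await Http.GetAsync(baseAddress + pathAndQuery);
    }

    private static string ParseHttpEndpoint(string endpointJson) =>
        throw new NotImplementedException("parse the Endpoints JSON for the http listener");
}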
In general we don't like to dictate anything about how your services talk to each other, but we are thinking of ways to solve this problem for HTTP as a complete built-in solution.
We implemented an HTTP gateway service for this purpose as well. To make sure we can have one HTTP gateway for any internal protocol, we implemented the gateway for HTTP-based internal services (like ASP.NET Web APIs) using an ASP.NET 5 middleware. It routes requests from e.g. /service to an internal Service Fabric address like fabric:/myapp/myservice by using the ServicePartitionClient and some retry logic from CommunicationClientFactoryBase.
We open-sourced this middleware and you can find it here:
https://github.com/c3-ls/ServiceFabric-HttpServiceGateway
There's also some more documentation in the wiki of the project.
This feature is built in for HTTP endpoints, starting with release 5.0 of Service Fabric. The documentation is available at https://azure.microsoft.com/en-us/documentation/articles/service-fabric-reverseproxy/
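For reference, calls through the built-in reverse proxy address services by application and service name on a cluster-configured port (19081 in most of the docs), so callers never resolve endpoints themselves; a hedged example with made-up names:

using System.Net.Http;
using System.Threading.Tasks;

public static class ReverseProxyCallSketch
{
    private static readonly HttpClient Http = new HttpClient();

    public static Task<string> GetValuesAsync()
    {
        // Format: http://<cluster>:<reverse proxy port>/<AppName>/<ServiceName>/<path>
        return Http.GetStringAsync("http://localhost:19081/MyApp/MyService/api/values");
    }
}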
We have used an open source project called Traefik with amazing success. There is an Azure Service Fabric wrapper around it - it's essentially a Go executable that is deployed onto the cluster as a guest executable.
It supports circuit breakers, weighted round robin load balancing, path & header version routing (this is awesome for hosting multiple API versions), and the list goes on. And it's got a handy portal to view the config and health stats.
The real power lies in how you configure it. It's done via the service itself in its ServiceManifest.xml. This allows you to deploy new services and have them immediately routable - no need to update a routing table etc.
Example
<StatelessServiceType ServiceTypeName="WebServiceType">
  <Extensions>
    <Extension Name="Traefik">
      <Labels xmlns="http://schemas.microsoft.com/2015/03/fabact-no-schema">
        <Label Key="traefik.frontend.rule.example">PathPrefixStrip: /a/path/to/service</Label>
        <Label Key="traefik.enable">true</Label>
        <Label Key="traefik.frontend.passHostHeader">true</Label>
      </Labels>
    </Extension>
  </Extensions>
</StatelessServiceType>
Highly recommended!
Azure Service Fabric makes it easy to implement the standard architecture for this scenario: a gateway service as a frontend for clients to connect to, with all N backend services communicating through that gateway. There are a few communication API stacks available as part of Service Fabric that make it easy to communicate from clients to services and between services themselves. The communication API stacks provided by Service Fabric hide the details of discovering, connecting, and retrying connections so that you can focus on the actual exchange of information.

When using the Service Fabric communication APIs, the services do not have to implement a mechanism for registering their names and endpoints with a separate routing service beyond the usual steps that are part of creating the service itself. The communication APIs take in the service URI and partition key and automatically resolve and connect to the right service instance.

This article provides a good starting point for deciding which communication APIs are best suited to your particular case, depending on whether you are using Reliable Actors or Reliable Services, protocols such as HTTP or WCF, and the programming language the services are written in. At the end of the article you will find links to more detailed articles and tutorials for different communication APIs. For a tutorial on communication in Web API services, see this.
We are using SF with a gateway pattern and about 13 services behind the gateway. We use the built-in DNS service that SF provides (see: https://learn.microsoft.com/en-us/azure/service-fabric/service-fabric-dnsservice); this allows internal service-to-service calls using known (internal-to-SF) DNS names, including calls from the gateway service to the internal services. There are some well-known ASP.NET Core gateways (Ocelot, ProxyKit) to use, but we rolled our own. We have an external load balancer to route to multiple gateway instances in SF.
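As an illustration of what those DNS-based calls look like (the DNS name comes from the ServiceDnsName you set on the service in the ApplicationManifest; the name, port, and path below are made up):

using System.Net.Http;
using System.Threading.Tasks;

public static class DnsCallSketch
{
    private static readonly HttpClient Http = new HttpClient();

    public static Task<string> GetOrdersAsync()
    {
        // "backend.myapp" is whatever ServiceDnsName is configured for the target service.
        return Http.GetStringAsync("http://backend.myapp:8080/api/orders");
    }
}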
When a service is started, it registers its endpoint with the Service Fabric naming service. Using the FabricClient APIs you can then ask Service Fabric for the registered endpoints associated with the registered service name.
So yes, just as you described your case, you would have a gateway that accepts an incoming URI for connection, and then uses that path information as the service-name lookup to create a proxy connection between the incoming request and the actual internal endpoint location.
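A small sketch of that lookup with FabricClient (the service URI is illustrative; a real gateway would cache the client and retry on failures):

using System;
using System.Fabric;
using System.Threading.Tasks;

public static class NamingLookupSketch
{
    public static async Task<string> GetRegisteredEndpointsAsync()
    {
        // FabricClient talks to the cluster's naming service.
        using (var fabricClient = new FabricClient())
        {
            ResolvedServicePartition partition =
                await fabricClient.ServiceManager.ResolveServicePartitionAsync(
                    new Uri("fabric:/MyApp/MyService"));

            // The Address property contains the JSON blob of registered listener addresses.
            return partition.GetEndpoint().Address;
        }
    }
}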
Looks like the team has posted one of the samples that shows how to do this: https://github.com/Azure/servicefabric-samples/tree/master/samples/Services/VS2015/WordCount