Is a hybrid on-premises and internet-exposed Service Fabric solution currently possible? - azure-service-fabric

I have a series of self-hostable WebApi services that I need to make available both on-premises and on the internet. Currently they are only on-premises, but I was wondering whether Service Fabric would allow me to connect an on-premises cluster and an Azure-hosted cluster to handle this hybrid scenario. Can I have a Service Fabric cluster with nodes both on-premises and in Azure?
I have it on my backlog to explore leveraging Service Fabric, but if this scenario were available we would bump up that priority.
Any details on implementing this, or even an alternative solution, would be greatly appreciated. We also tried Azure App Proxy for the internet exposure, but we are having problems with the authentication headers going across, as we are not using Azure AD.

It's possible to create a cluster that spans multiple locations, as mentioned in this article.
However, you should realize that this is not a supported feature. If you get it wrong, losing one of the two locations will result in data loss.
I'd recommend using one cluster.

Related

Azure Service Fabric and Kubernetes communication within same network

I am looking at strategies for bidirectional communication between applications hosted on separate clusters. Some of them are hosted in Service Fabric and the others in Kubernetes. One option is to use a DNS service on the Service Fabric side and its counterpart on Kubernetes. On the other hand, the Reverse Proxy seems to be a way to go. After going through the options, I was wondering: what is actually the best way to create microservices that can be deployed in either SF or K8s without worrying about the communication model, one that requires the fewest changes if we suddenly wish to migrate an app from SF to K8s while still keeping it available to the SF apps, and vice versa?

deploying multiple apps on a single cluster

I am wondering how to deploy multiple applications, such as a Spring Boot app and a Node.js app, on a single Kubernetes cluster that has a single Istio load balancer.
Is it possible?
I am a beginner in DevOps, so I need some guidance on this.
Thank you for any suggestions.
Yes, it's possible. Moreover, this is the exact purpose of the LoadBalancer: to be a single point of entry for multiple applications.
If you deploy the Bookinfo example application, you will create three versions of the reviews service (reviews-v1, reviews-v2, reviews-v3; as far as K8s and Istio are concerned, those are three different apps). With Virtual Services and Destination Rules, Istio manages traffic between those three applications.
Since you are a beginner, I would strongly recommend a thorough read of the Istio documentation, especially the Tasks and Examples sections.
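As a concrete sketch of that single-entry-point idea, the following routes two unrelated apps through one Istio ingress load balancer by URI prefix. It assumes Istio is already installed with its default ingress gateway; the service names `spring-app` and `node-app` and their ports are hypothetical placeholders for your own Services:

```shell
# Sketch: one Gateway + one VirtualService fronting two apps.
# "spring-app" and "node-app" are illustrative Service names.
kubectl apply -f - <<'EOF'
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: shared-gateway
spec:
  selector:
    istio: ingressgateway   # bind to Istio's default ingress gateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: shared-routes
spec:
  hosts:
  - "*"
  gateways:
  - shared-gateway
  http:
  - match:
    - uri:
        prefix: /spring
    route:
    - destination:
        host: spring-app    # hypothetical Spring Boot Service
        port:
          number: 8080
  - match:
    - uri:
        prefix: /node
    route:
    - destination:
        host: node-app      # hypothetical Node.js Service
        port:
          number: 3000
EOF
```

With something like this applied, `http://<ingress-ip>/spring` and `http://<ingress-ip>/node` reach the two different backends through the same external LoadBalancer IP.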

SF - high availability options?

I have a greenfield application that I would like to host on Service Fabric, but high availability will be a key concern. Is there a way for a cluster to span across Azure regions? Does Service Fabric recognize Availability Zones?
Is there a way for a cluster to span across Azure regions?
Yes, it's in the FAQ.
The core Service Fabric clustering technology can be used to combine machines running anywhere in the world, so long as they have network connectivity to each other. However, building and running such a cluster can be complicated.
If you are interested in this scenario, we encourage you to get in contact either through the Service Fabric GitHub Issues List or through your support representative in order to obtain additional guidance. The Service Fabric team is working to provide additional clarity, guidance, and recommendations for this scenario.
Does Service Fabric recognize Availability Zones?
Yes, but with limitations.
Make sure to check out this example (marked as 'not production ready'), and check out SF Mesh as well, which allows you to deploy to multiple AZs (though it is currently in preview).

Azure Service Fabric - connect to a local Service Fabric cluster from outside the VM it's running on?

We have a 5-node Azure Service Fabric Cluster as our main Production microservices hub. Up until now, for testing purposes, we've just been pushing out separate versions of our applications (the production application with ".Test" appended to the name) to that production SFC.
We're looking for a better approach, namely a separate test Service Fabric cluster. But the issue comes down to cost: the smallest SFC you can create in Azure is 3 nodes. Further, you can't shut down an SFC when it's not being used, which we would also need to do to save on costs.
So now I'm looking at just spinning up a plain Windows VM in Azure and installing the local Service Fabric cluster tooling (which allows a one-node setup). Is it possible to do this and be able to communicate with the cluster from outside the VM?
What you are trying to accomplish is setting up a standalone cluster. The steps are documented in the docs.
Yes, you can access the cluster from outside the VM; in simple terms, enable access to the network and open the firewall ports.
Technically, both deployments (the standalone guide and the DevCluster) are very similar. The main difference is that the standalone guide gives you better control over the templates, whereas the development setup offers few options and automates the whole process.
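As a minimal sketch of the "open the firewall ports" step, run something like the following in an elevated prompt on the VM. 19000 and 19080 are the Service Fabric defaults for client connections and Service Fabric Explorer; adjust them if your cluster manifest uses different ports. The rule names and the application port are illustrative:

```shell
# Sketch (Windows VM, elevated prompt): allow inbound traffic to the
# default Service Fabric endpoints. Rule names are arbitrary.
netsh advfirewall firewall add rule name="SF Client" dir=in action=allow protocol=TCP localport=19000
netsh advfirewall firewall add rule name="SF Explorer" dir=in action=allow protocol=TCP localport=19080
# Also allow whatever ports your own services listen on, e.g. 8080:
netsh advfirewall firewall add rule name="SF App 8080" dir=in action=allow protocol=TCP localport=8080
```

Remember that on Azure the VM's network security group (and any load balancer rules) must also permit these ports, not just the in-guest firewall.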
PS: I would highly recommend having a UAT\Staging cluster with the exact same specs as the production version; the approach you describe could be a good idea for a staging environment. Having environments with different specs increases the risk of issues, mainly related to configuration and concurrency.

Google Container Engine Subnets

I'm trying to isolate services from one another.
Suppose ops-human has a bunch of mysql stores running on Google Container Engine, and dev-human has a bunch of node apps running on the same cluster. I do NOT want dev-human to be able to access any of ops-human's mysql instances in any way.
Simplest solution: put both of these in separate subnets. How do I do such a thing? I'm open to other implementations as well.
The Kubernetes Network-SIG team has been working on the network isolation issue for a while, and there is an experimental API in Kubernetes 1.2 to support this. The basic idea is to provide network policy on a per-namespace basis; a third-party network controller can then react to changes to resources and enforce the policy. See the latest blog post about this topic for details.
EDIT: This answer is more about the open-source kubernetes, not for GKE specifically.
The NetworkPolicy resource is now available in GKE in alpha clusters (see latest blog post).
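As a sketch of what that per-namespace isolation looks like with the NetworkPolicy resource: the policy below allows pods in the ops namespace to talk to each other while blocking ingress from everywhere else (such as a dev namespace). The namespace name is illustrative, and enforcement requires a network plugin that supports NetworkPolicy:

```shell
# Sketch: restrict ingress to the "ops" namespace (name is illustrative)
# to traffic from pods in that same namespace.
kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: ops-isolate
  namespace: ops
spec:
  podSelector: {}          # applies to every pod in "ops"
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector: {}      # allow only pods within "ops" itself
EOF
```

With this applied, the dev apps can no longer reach the mysql pods in ops, without any subnet surgery on the cluster itself.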