Service Fabric Stateless Service Deployment on Individual Nodes

I want to deploy multiple instances of my Service Fabric Stateless Service (Background Service) on individual nodes in the cluster.
So if I have 10 nodes in the cluster, I want 10 instances of the stateless service, one deployed on each node.
Right now the deployment is placing 2-3 instances on a single node while some nodes go unused.
Is there a way to bind the application to individual nodes?

In the ApplicationManifest file you can set the instance count to -1, and the Service Fabric runtime will deploy one instance of the service on every node. The value can also be supplied via parameter files.
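As a minimal sketch (the service, type, and parameter names here are placeholders, not taken from the question), the relevant pieces of ApplicationManifest.xml would look like this:

```xml
<ApplicationManifest ApplicationTypeName="MyAppType" ApplicationTypeVersion="1.0.0"
                     xmlns="http://schemas.microsoft.com/2011/01/fabric">
  <Parameters>
    <!-- -1 tells the runtime to place one instance on every node -->
    <Parameter Name="BackgroundService_InstanceCount" DefaultValue="-1" />
  </Parameters>
  <ServiceManifestImport>
    <ServiceManifestRef ServiceManifestName="BackgroundServicePkg" ServiceManifestVersion="1.0.0" />
  </ServiceManifestImport>
  <DefaultServices>
    <Service Name="BackgroundService">
      <StatelessService ServiceTypeName="BackgroundServiceType"
                        InstanceCount="[BackgroundService_InstanceCount]">
        <SingletonPartition />
      </StatelessService>
    </Service>
  </DefaultServices>
</ApplicationManifest>
```

The parameter can then be overridden per environment in a parameter file such as Cloud.xml:

```xml
<Application Name="fabric:/MyApp" xmlns="http://schemas.microsoft.com/2011/01/fabric">
  <Parameters>
    <Parameter Name="BackgroundService_InstanceCount" Value="-1" />
  </Parameters>
</Application>
```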

Related

Can you make a kubernetes container deployment conditional on whether a configmap variable is set?

If I have a k8s deployment file for a service with multiple containers like api and worker1, can I make it so that there is a configmap with a variable worker1_enabled, such that if my service is restarted, container worker1 only runs if worker1_enabled=true in the configmap?
The short answer is No.
According to k8s docs, Pods in a Kubernetes cluster are used in two main ways:
Pods that run a single container. The "one-container-per-Pod" model is the most common Kubernetes use case; in this case, you can think of a Pod as a wrapper around a single container; Kubernetes manages Pods rather than managing the containers directly.
Pods that run multiple containers that need to work together. A Pod can encapsulate an application composed of multiple co-located containers that are tightly coupled and need to share resources. These co-located containers form a single cohesive unit of service—for example, one container serving data stored in a shared volume to the public, while a separate sidecar container refreshes or updates those files. The Pod wraps these containers, storage resources, and an ephemeral network identity together as a single unit.
Unless your application requires it, it is better to separate the worker and api containers into their own Pods. So you may have one Deployment for the worker and one for the api.
As for deploying the worker only when worker1_enabled=true, that can be done with Helm: create a chart where the worker's manifest is rendered only when that value is set, as in the sketch below.
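A minimal sketch of such a chart, assuming a values flag named worker1_enabled and a placeholder image (both are assumptions, not from the question):

```yaml
# values.yaml
worker1_enabled: true
```

```yaml
# templates/worker1-deployment.yaml
# The whole manifest is rendered only when the flag is true.
{{- if .Values.worker1_enabled }}
apiVersion: apps/v1
kind: Deployment
metadata:
  name: worker1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: worker1
  template:
    metadata:
      labels:
        app: worker1
    spec:
      containers:
        - name: worker1
          image: example/worker1:latest   # placeholder image
{{- end }}
```

Disabling the flag on an upgrade (`helm upgrade myapp ./chart --set worker1_enabled=false`) removes the worker Deployment from the release.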
One last note: a Service in Kubernetes is an abstract way to expose an application running on a set of Pods as a network service.

How can I deploy one entry point for applications running across Kubernetes clusters?

I have two K8s clusters set up: one on AWS EKS, the other on GCP. I set up a Rancher server which is used to manage these two clusters. I have an application (appA) which is packaged in a Helm chart; it is just a REST API server built with Node.js + Express.
It is deployed to both clusters via Rancher. After deployment, appA runs in the two clusters separately.
I have another application (appB) (running outside of K8S) which is listening on a database stream and it needs to call appA (in the K8S clusters) to process the change events.
My question is how I can deploy an entry point, like nginx, that forwards appB's requests to appA, so that one of the Pods from either cluster serves each request.
You can expose the appA Kubernetes Service as type LoadBalancer in each cluster.
You can then run nginx outside of the Kubernetes clusters, create an upstream, and add both the EKS and GKE load balancer URLs as backend servers.
See the following guide:
https://docs.nginx.com/nginx/admin-guide/load-balancer/http-load-balancer/
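A minimal nginx sketch of that setup (the upstream hostnames are placeholders for the two clusters' LoadBalancer addresses):

```nginx
# nginx.conf (sketch; hostnames are placeholders)
events {}

http {
    upstream appA_backend {
        server eks-appa.example.com:80;   # EKS LoadBalancer address
        server gke-appa.example.com:80;   # GKE LoadBalancer address
    }

    server {
        listen 80;
        location / {
            # appB's requests are proxied to one of the clusters
            proxy_pass http://appA_backend;
        }
    }
}
```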

Architecture Question - User Driven Resource Allocation

I am working on a SaaS application built on Azure AKS. Users will connect to a web frontend, and depending on their selection, will deploy containers on demand for their respective Organization. There will be a backend API layer that will connect to the Kubernetes API for the deployment of different YAML configurations.
Users will select a predefined container (NodeJs container app), and behind the scenes that container will be created from a template and a URL provided to the user to consume that REST API resource via common HTTP verbs.
I read the following blurb on the Kubernetes docs:
You'll rarely create individual Pods directly in Kubernetes—even singleton Pods. This is because Pods are designed as relatively ephemeral, disposable entities. When a Pod gets created (directly by you, or indirectly by a controller), the new Pod is scheduled to run on a Node in your cluster. The Pod remains on that node until the Pod finishes execution, the Pod object is deleted, the Pod is evicted for lack of resources, or the node fails.
I am thinking that each "organization account" in my application should deploy containers that share a context constrained to a Pod, with multiple containers spun up for each "resource" request. This is because, arguably, an Organization would prefer that its "services" were unique to the Organization and not shared within the scope of others. Assume that namespace, Service, or Pod naming is not a concern, as each will be named on the backend with a GUID or similar unique identifier.
Questions:
Is this an appropriate use of Pods and Services in Kubernetes?
Will scaling out mean that I add nodes to the cluster to support the maximum constraint of 110 Pods per node?
Should I isolate these data services/Pods from the front end in their own dedicated cluster, then add a cluster when (if) the maximum node count of 15,000 is reached?
I suggest you have a look at Deployments. As a mental model:
A container runs in a Pod.
A Pod is managed by a Deployment.
A Service exposes the Deployment's Pods.
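A minimal sketch of that chain (all names and the image are placeholders):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: org-api                     # e.g. a per-organization GUID
spec:
  replicas: 2
  selector:
    matchLabels:
      app: org-api
  template:
    metadata:
      labels:
        app: org-api
    spec:
      containers:
        - name: api                          # the container lives inside the Pod
          image: example/nodejs-api:latest   # placeholder image
          ports:
            - containerPort: 3000
---
apiVersion: v1
kind: Service
metadata:
  name: org-api
spec:
  selector:
    app: org-api                    # the Service exposes the Deployment's Pods
  ports:
    - port: 80
      targetPort: 3000
```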

How to use a load balancer to route traffic to REST-based stateless services that are not spread across all nodes of a given node type

Say I have a Service Fabric cluster with one node type that has 5 nodes, and I deploy two stateless services onto those 5 nodes, but with service_1 placed on 3 nodes and service_2 on the other 2 nodes.
I understand that a node type is actually a VMSS and that Azure will create a load balancer on top of that VMSS. This works perfectly for a REST stateless service that is spread across all nodes of a given node type.
But in my case, where each service is deployed on only part of the nodes, can I still leverage the load balancer to route to the instances of the two separate services?
You have multiple options:
Use the built-in HTTP gateway service from Service Fabric. Unfortunately, I haven't found any documentation about this yet, so I don't know the advantages and disadvantages of this solution. See this comment in the service-fabric-issues project for some details.
Implement your own stateless gateway service with an InstanceCount of -1 (which means it gets placed on every node). This service acts like an internal load balancer and routes every request to the correct service. See weidazhao's project (from a Microsoft employee; it might become part of the SDK) or our project for existing gateway implementations.
Keep using one load balancer that points to the whole scale set, and use probe methods to let the load balancer disable nodes on which the service is not running (see the probe sketch after this list). However, if placement changes, this results in failing requests until the load balancer notices.
Create separate node types (VM Scale Sets) for every service and create a separate load balancer for each node type. However, this results in management overhead and might not be ideal in terms of resource usage.
There is an open feature suggestion for this topic on the Azure UserVoice site - your votes would be welcome.
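For the probe approach, a sketch of what such a health probe could look like in an ARM template (the name, port, and path are assumptions; the probe only succeeds on nodes where the service actually listens, so other nodes are taken out of rotation):

```json
{
  "name": "service1HealthProbe",
  "properties": {
    "protocol": "Http",
    "port": 8080,
    "requestPath": "/health",
    "intervalInSeconds": 5,
    "numberOfProbes": 2
  }
}
```

A load-balancing rule for service_1's port would then reference this probe.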

Azure Service Fabric nodes, node types, instances and scale sets

After experimenting with Azure's Service Fabric for a few days, I still feel uncomfortable with the following four key words:
* instance
* node
* node type
* scale set.
What do they mean? What are the differences?
Instance: Depends on the context - it could mean a VM, an instance of a service, etc.
Node: A node within the cluster - in an Azure deployment right now that would mean a VM, but if you're running the dev environment on your box, then a node is really a set of processes.
Node type: Defines the size and other properties of a VM type. Each node type in a cluster has to be a separate VM scale set.
Scale set: A set of VMs managed as one.
Some useful resources:
https://azure.microsoft.com/en-gb/documentation/articles/service-fabric-cluster-nodetypes/
https://azure.microsoft.com/en-gb/documentation/articles/service-fabric-cluster-creation-via-portal/
An SF cluster consists of a group or ring of VMs (sometimes called "nodes") that know about and talk to each other; this is taken care of for you by the SF framework (think of SF as a platform as a service).
An SF app consists of microservices, so your solution structure will contain:
The SF app project, which contains the application manifests and deploy scripts
Microservice projects (these can be actors, or stateful or stateless services)
When the SF app gets deployed, those microservices are installed on the VMs, so you now have "instances" of those microservices. If you have 5 VMs in the cluster, a stateless microservice (with an instance count of -1) will be deployed to all 5 VMs.
For stateful microservices, one replica will be elected as the primary and two will be assigned as secondaries (with a target replica set size of 3).
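As a sketch of how those counts are declared (all service and type names here are placeholders), the DefaultServices section of an ApplicationManifest might look like:

```xml
<DefaultServices>
  <!-- Stateless: -1 = one instance per node -->
  <Service Name="MyStatelessService">
    <StatelessService ServiceTypeName="MyStatelessType" InstanceCount="-1">
      <SingletonPartition />
    </StatelessService>
  </Service>
  <!-- Stateful: 3 replicas = 1 primary + 2 secondaries -->
  <Service Name="MyStatefulService">
    <StatefulService ServiceTypeName="MyStatefulType"
                     TargetReplicaSetSize="3" MinReplicaSetSize="2">
      <SingletonPartition />
    </StatefulService>
  </Service>
</DefaultServices>
```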