Service Fabric: expose the same port on different nodes - azure-service-fabric

I have a Service Fabric cluster with 2 nodes.
Each node is exposed to the web with a stateless ASP.NET Core 2.0 service.
I'm using the HTTP.sys web server (HttpSys).
Each node has a unique IP address and the following ports:
First service on Node 1:
<Endpoint Protocol="http" Name="ServiceEndpoint" Type="Input" Port="80" />
<Endpoint Protocol="http" Name="ServiceEndpointSecure" Type="Input" Port="443" />
Second service on Node 2:
<Endpoint Protocol="http" Name="ServiceEndpoint" Type="Input" Port="81" />
<Endpoint Protocol="http" Name="ServiceEndpointSecure" Type="Input" Port="444" />
I would like both services to listen on ports 80 and 443, but if I change the ServiceManifest accordingly, I get an error that the port is already in use.
How can I make this work, given that the services are on two different nodes, with two different IPs and two different DNS names associated with those IPs?

Use a load balancer like Azure Load Balancer to forward the traffic arriving at the two IP addresses to your individual services. The services can then run on any port inside the cluster.
Alternative: Azure API Management can also be used to create rules that direct traffic to specific services.
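For example, a load-balancing rule can be added to the cluster's Azure Load Balancer with the Azure CLI. This is a minimal sketch, not a verified setup: the resource group, load balancer, frontend, and backend pool names are placeholders, and the backend port is whatever internal port the service actually uses:

az network lb rule create \
    --resource-group MyResourceGroup \
    --lb-name MyClusterLoadBalancer \
    --name HttpToServiceA \
    --protocol Tcp \
    --frontend-port 80 \
    --backend-port 8080 \
    --frontend-ip-name MyFrontendIp \
    --backend-pool-name MyBackendPool

With one such rule per frontend IP, both services can answer on port 80 externally while listening on distinct internal ports.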

Related

How to communicate between Backend (microservices) in AWS EKS?

I have two Node.js backend applications that depend on each other, but I'm confused about how these two backends communicate in Kubernetes on AWS EKS.
In a default Kubernetes setting, creating a Service and registering pods into it creates an internal DNS address that each pod can use to reach the others.
So, if your services are named service-a and service-b, service-a can reach service-b by sending requests to the service-b host.
The FQDN for each service is service-<x>.<namespace>.svc.cluster.local
More information can be found in the Kubernetes documentation on DNS for Services and Pods.
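As a minimal sketch (names and ports are assumptions), the second backend only needs a Service selecting its pods; the first backend can then call it by that Service's DNS name:

apiVersion: v1
kind: Service
metadata:
  name: service-b
spec:
  selector:
    app: service-b          # must match the labels on service-b's pods
  ports:
    - port: 3000            # port other pods connect to
      targetPort: 3000      # port the Node.js process listens on

# From any pod in the same namespace, service-b is now reachable at
#   http://service-b:3000
# and from any namespace at
#   http://service-b.<namespace>.svc.cluster.local:3000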

Traffic from two ports to one entrypoint?

In Kubernetes I use the Traefik ingress controller. My Kubernetes cluster runs on bare metal.
I have two services listening on ports 8080 and 8082. These two services are tied to one deployment. The deployment has an application that listens on these two ports for different tasks.
Can traffic be routed to these two ports through the same entrypoint, or is this not recommended?
I'm not familiar with Kubernetes, so excuse me if I misunderstood the question.
I'm running Traefik with a single entrypoint on port 443 in front of multiple docker-compose services. That's no problem whatsoever. However, Traefik needs to know which service the client wants to reach. This is done by specifying different host rules for these services.

Kubernetes pods are communicating with other pods over load balancer instead of internally

I have a Kubernetes cluster in AWS GovCloud. I have deployed the Kong Ingress Controller (private NLB) along with the kong-proxy service in namespace "kong". In namespace "A" I have deployed my applications along with an Ingress object resource. I have deployed Keycloak (an authentication/authorization app), Statuses (a custom Ruby on Rails app that returns the statuses of an operation, e.g. 10%, 20%, 50%, 100% complete), and Operation A, a custom-built Java app that performs a calculation.
My flow:
Client --> Load Balancer DNS --> kong-proxy --> Ingress --> Keycloak service --> authenticate with Keycloak --> returns an auth bearer token to the console output.
Client (me) passes token --> Load Balancer DNS --> kong-proxy --> Ingress --> Operation A service --> authenticate and initialize Operation A
Operation A service --> sends status update to Statuses service --> connection refused error
When troubleshooting the network flow, I see that Operation A is trying to connect to Statuses via the load balancer's DNS name:
Operation A service --> Load Balancer DNS --> kong-proxy --> Ingress --> Statuses service
But this is a very strange network flow. A pod shouldn't connect to another pod in the same cluster and namespace by going through a load balancer. It should just connect via the K8s internal DNS name: name.namespace.svc.cluster.local:port/path
Is this an issue with the Kong ingress controller or should I be looking at my application config? Can any annotations or parameters be added to the ingress controller or ingress object manifests to correct this network pathing?
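One thing worth checking, independent of Kong, is how Operation A discovers the Statuses endpoint. As a purely hypothetical sketch (the env var name, port, and image are assumptions, and the namespace is lowercased to be a valid Kubernetes name), the Deployment could inject the in-cluster DNS name instead of the load balancer's DNS:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: operation-a
  namespace: a                          # the question's namespace "A"
spec:
  replicas: 1
  selector:
    matchLabels:
      app: operation-a
  template:
    metadata:
      labels:
        app: operation-a
    spec:
      containers:
        - name: operation-a
          image: operation-a:latest     # placeholder image
          env:
            - name: STATUSES_URL        # hypothetical config key
              value: "http://statuses.a.svc.cluster.local:3000"

If the application instead has the external load balancer DNS baked into its configuration, traffic will take the long path regardless of how the ingress controller is set up.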

GKE 1 load balancer with multiple apps on different assigned ports

I want to be able to deploy several single-pod apps and access them on a single IP address, relying on Kubernetes to assign the ports, as it does when you use a NodePort service.
Is there a way to use NodePort with a load balancer?
Honestly, NodePort might work by itself, but GKE seems to block direct access to the nodes. There don't seem to be firewall controls like on their unmanaged VMs.
Here's a service if we need something to base an answer on. In this case, I want to deploy 10 of these services, each a different application, on the same IP, each publicly accessible on a different port, and each proxying port 80 of the nginx container.
---
apiVersion: v1
kind: Service
metadata:
  name: foo-svc
spec:
  selector:
    app: nginx
  ports:
    - name: foo
      protocol: TCP
      port: 80
  type: NodePort
GKE seems to block direct access to the nodes.
GCP allows creating firewall (FW) rules that allow incoming traffic either to 'All Instances in the Network' or to 'Specified Target Tags/Service Account' in your VPC network.
Rules are persistent unless the opposite is specified under the organization's policies.
A node's external IP address can be checked in Cloud Console --> Compute Engine --> VM Instances, or with kubectl get nodes -o wide.
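For instance, a rule along these lines (the rule name, network, and node tag are placeholders; 30000-32767 is the default NodePort range) would open NodePort traffic to tagged nodes:

gcloud compute firewall-rules create allow-nodeports \
    --network=default \
    --allow=tcp:30000-32767 \
    --source-ranges=0.0.0.0/0 \
    --target-tags=gke-mycluster-node    # placeholder node tag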
I run GKE (managed k8s) and can access all my assets externally.
I have opened all the needed ports in my setup; below you can find the quickest example from it:
$ kubectl get nodes -o wide
NAME AGE VERSION INTERNAL-IP EXTERNAL-IP
gke--mnnv 43d v1.14.10-gke.27 10.156.0.11 34.89.x.x
gke--nw9v 43d v1.14.10-gke.27 10.156.0.12 35.246.x.x
$ kubectl get svc -o wide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) SELECTOR
knp-np NodePort 10.0.11.113 <none> 8180:30008/TCP 8180:30009/TCP app=server-go
$ curl 35.246.x.x:30008/test
Hello from ServerGo. You requested: /test
That is why it looks like a bunch of NodePort-type Services would be sufficient (each one serves requests for a particular selector).
If for some reason it's not possible to set up the FW rules to allow traffic directly to your nodes, it's possible to configure a GCP TCP Load Balancer instead.
Cloud Console --> Network Services --> Load Balancing --> Create LB --> TCP Load Balancing.
There you can select your GKE nodes (or pool of nodes) as the 'Backend' and specify all the needed ports for the 'Frontend'. For the frontend you can reserve a static IP right during the configuration and specify the 'Port' range as two port numbers separated by a dash (assuming you have multiple ports to be forwarded to your node pool). Additionally, you can create multiple 'Frontends' if needed.
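The same can be sketched with the gcloud CLI; treat this as an unverified outline rather than exact steps, since all names, the region/zone, and the port range are placeholders (the instance names are taken from the truncated kubectl output above):

gcloud compute addresses create lb-ip --region=europe-west3
gcloud compute target-pools create gke-pool --region=europe-west3
gcloud compute target-pools add-instances gke-pool \
    --instances=gke--mnnv,gke--nw9v --instances-zone=europe-west3-a
gcloud compute forwarding-rules create gke-fw-rule \
    --region=europe-west3 \
    --address=lb-ip \
    --port-range=30000-30010 \
    --target-pool=gke-pool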
I hope that helps.
Is there a way to use NodePort with a load balancer?
The Kubernetes LoadBalancer service type builds on top of NodePort. So internally, LoadBalancer uses NodePort, meaning that when a LoadBalancer-type Service is created, it is automatically mapped to a NodePort. Although it's tricky, it is also possible to create a NodePort-type Service and manually configure the Google-provided load balancer to point to the NodePorts.
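To make that relationship concrete, here is a minimal sketch (selector and port values are placeholders) of the same foo-svc as a LoadBalancer-type Service; the nodePort field shows the NodePort it builds on:

apiVersion: v1
kind: Service
metadata:
  name: foo-svc
spec:
  type: LoadBalancer        # provisions a GCP load balancer automatically
  selector:
    app: nginx
  ports:
    - name: foo
      protocol: TCP
      port: 80              # port on the load balancer's external IP
      targetPort: 80        # container port
      nodePort: 30080       # the underlying NodePort (optional to pin)

kubectl get svc then shows both the external IP assigned by GCP and the 80:30080/TCP mapping.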

Do API gateways such as Zuul or Nginx require backend services to be exposed externally as well?

We are trying to figure out a microservice architecture where we have an API gateway (Zuul in this case). Now, do all the services that Zuul redirects requests to also need to be exposed externally? It seems counterintuitive, as all these services can have private/local/cluster access, and the gateway is the one that should be externally exposed. Is this a correct assessment? In what scenarios would you want these backend services to be exposed externally?
Normally, you would not expose your backend services externally. The gateway (or the ingress) serves as the external entry point and proxies requests to the internal network.
I am familiar with one use case where I expose some services directly: I do not want to expose certain admin services running on my cluster to the external world, but I do want to expose them to my VPN. So I have an ingress forwarding traffic between the external network and the cluster, and NodePort services that expose the admin apps to my VPN.
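In Kubernetes terms, this split is just a matter of Service types. A minimal sketch (all names are placeholders): backends stay on the default ClusterIP type, so only the gateway is reachable from outside:

apiVersion: v1
kind: Service
metadata:
  name: orders-backend        # hypothetical backend behind the gateway
spec:
  type: ClusterIP             # default type: cluster-internal only
  selector:
    app: orders
  ports:
    - port: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: api-gateway           # e.g. the Zuul deployment
spec:
  type: LoadBalancer          # the only externally exposed component
  selector:
    app: zuul
  ports:
    - port: 80
      targetPort: 8080

The gateway then forwards to http://orders-backend:8080 over the cluster network, which is never reachable from the internet.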