AKS pod with two services within OSM - kubernetes

We have an application which exposes two ports (one for the API and one for WebSocket). The application is deployed in an OSM-enabled namespace, and we're using nginx-ingress for external access. Currently there are:
two services connected to this pod (one for the API and a second one for WebSocket)
#api-svc
Type: ClusterIP
IP: [some-ip]
Port: http 80/TCP
TargetPort: 18610/TCP
Endpoints: [some-ip]:18610
-------
#websocket-svc
Type: ClusterIP
IP: [some-ip]
Port: ws 80/TCP
TargetPort: 18622/TCP
Endpoints: [some-ip]:18622
one ingress rule which routes traffic based on path:
paths:
- path: /api
  pathType: ImplementationSpecific
  backend:
    service:
      name: api-svc
      port:
        number: 80
- path: /swiftsockjs
  pathType: ImplementationSpecific
  backend:
    service:
      name: websocket-svc
      port:
        number: 80
one IngressBackend that allows the ingress traffic in OSM:
Spec:
  Backends:
    Name: api-svc
    Port:
      Number: 18610
      Protocol: http
    Name: websocket-svc
    Port:
      Number: 18622
      Protocol: http
  Sources:
    Kind: Service
    Name: ingress-nginx-controller
    Namespace: ingress
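For reference, the IngressBackend above as a manifest would look roughly like this (a sketch reconstructed from the describe output; the metadata name and namespace are assumptions):
apiVersion: policy.openservicemesh.io/v1alpha1
kind: IngressBackend
metadata:
  name: app-ingress-backend   # assumption: actual name not shown above
  namespace: app-namespace    # assumption: the OSM-enabled namespace
spec:
  backends:
  - name: api-svc
    port:
      number: 18610
      protocol: http
  - name: websocket-svc
    port:
      number: 18622
      protocol: http
  sources:
  - kind: Service
    name: ingress-nginx-controller
    namespace: ingress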
The problem we are facing is that traffic is routed to only one targetPort at a time (i.e. only to 18610 or only to 18622), regardless of the URL path. In the ingress controller logs it's visible that traffic is routed correctly (/api to 18610 and /swiftsockjs to 18622). The problem shows up in the Envoy sidecar logs: both requests go to the same upstream_cluster, although it should differ by port (visible when comparing the sidecar logs for the two requests side by side).
What's strangest, the behavior changes randomly when the service or IngressBackend is redeployed. So one time all requests are forwarded to 18610, and another time to 18622.
We have tried using a multi-port service, but according to this OSM PR it's not supported (the results were exactly the same anyway).
Does anyone have any ideas how to fix this? I've read almost the whole OSM documentation and the MS docs on the OSM add-on, but haven't found an answer to this problem (or a similar example with a multi-port pod in OSM).

According to Azure support, such a solution is not possible within OSM. Quote:
A restart of the process or the pod sometimes results in the IP:PORT change but also traffic will be consistently forwarded to that IP:PORT.
This appears to be due to the behavior of the proxy. As per the OSM GitHub documentation, it is a 1:1 relationship between the proxy and the endpoint.
It is also a 1:1 relationship between the proxy and the service.
In other words, the proxy will not be able to handle a pod serving multiple services.
The suggestion from MS was to split the application logic into separate deployments (pods) so that each serves one port at a time.
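A minimal sketch of that suggestion, splitting the single deployment in two so that each pod exposes exactly one port (all names, labels and the image are assumptions):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app-api             # assumption: hypothetical name
spec:
  selector:
    matchLabels:
      app: my-app-api
  template:
    metadata:
      labels:
        app: my-app-api
    spec:
      containers:
      - name: api
        image: my-app:latest   # assumption: same image, serving only the API
        ports:
        - containerPort: 18610
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app-websocket
spec:
  selector:
    matchLabels:
      app: my-app-websocket
  template:
    metadata:
      labels:
        app: my-app-websocket
    spec:
      containers:
      - name: websocket
        image: my-app:latest   # assumption: same image, serving only WebSocket
        ports:
        - containerPort: 18622
api-svc would then select app: my-app-api and websocket-svc would select app: my-app-websocket, so each Envoy sidecar fronts exactly one service and one port, which matches the 1:1 relationship described above.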

Related

Kubernetes expose a service on a port over tls

I have my application https://myapp.com deployed on K8S, with an nginx ingress controller. HTTPS is terminated at nginx.
Now there is a need to expose one service on a specific port, for example https://myapp.com:8888. The idea is to keep https://myapp.com secured inside the private network and expose only port 8888 to the internet for integration.
Is there a way all traffic can be handled by the ingress controller, including TLS termination, while also exposing port 8888 and mapping it to a service?
Or do I need another nginx terminating TLS and exposed on a NodePort? I am not sure if I can access services like https://myapp.com:<node_port> over HTTPS.
Is using multiple ingress controllers an option?
What is the best practice to do this in Kubernetes?
Use the sidecar proxy pattern to add HTTPS support to the application running inside the pod.
Run nginx as a sidecar proxy container fronting the application container inside the same pod. Access the application through port 8888 on the nginx proxy; nginx routes the traffic to the application.
The post below shows how it can be implemented:
https://vorozhko.net/kubernetes-sidecar-pattern-nginx-ssl-proxy-for-nodejs
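A minimal sketch of that pattern (the images, ports, config and TLS secret names are assumptions, not taken from the linked post):
apiVersion: v1
kind: Pod
metadata:
  name: app-with-tls-sidecar
spec:
  containers:
  - name: app
    image: my-app:latest       # assumption: app listens on localhost:3000
  - name: nginx-proxy          # terminates TLS on 8888 and forwards to the app
    image: nginx:1.25
    ports:
    - containerPort: 8888
    volumeMounts:
    - name: nginx-conf         # assumption: proxy_pass http://127.0.0.1:3000 plus ssl_* directives
      mountPath: /etc/nginx/conf.d
    - name: tls
      mountPath: /etc/nginx/tls
  volumes:
  - name: nginx-conf
    configMap:
      name: nginx-proxy-conf   # assumption: pre-created ConfigMap
  - name: tls
    secret:
      secretName: myapp-tls    # assumption: pre-created TLS secret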
It is not best practice to expose a custom port over the internet.
Instead, create a sub-domain (e.g. https://custom.myapp.com) which points to the internal service on port 8888.
Then create a separate nginx Ingress (not an ingress controller) which points to that custom.myapp.com sub-domain.
An example manifest file follows:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-myapp-service
  namespace: abc
spec:
  rules:
  - host: custom.myapp.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: myapp-service
            port:
              number: 8888
Hope this helps.
So you have service foo on some port, which you want to have available on your internal network, and service bar, which runs on port 8888 in that same pod.
It's as simple as setting up two services pointing to that pod, with different spec.ports[].targetPort values. My example assumes a svc foo pointing at port 80, and a svc bar pointing at port 8888 on the pod.
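A minimal sketch of those two Services (ports as described above; the pod label app: myapp is an assumption):
apiVersion: v1
kind: Service
metadata:
  name: foo
spec:
  selector:
    app: myapp          # assumption: the pod's actual label
  ports:
  - port: 80
    targetPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: bar
spec:
  selector:
    app: myapp          # same pod, second service
  ports:
  - port: 80            # service port 80, mapped to 8888 in the pod
    targetPort: 8888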
Take care that, generally, the ingress controller only services HTTP and HTTPS connections on ports 80 and 443. That is a network setting generally defined for the nodes running the ingress controller; arbitrary TCP/UDP ports are not serviced out-of-the-box by the ingress controller.
My advice is to use something like the following, and use the path to expose the required service.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-wildcard-host
spec:
  rules:
  - host: "myapp.com"
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: foo
            port:
              number: 80
      - pathType: Prefix
        path: "/bar"
        backend:
          service:
            name: bar
            port:
              number: 80
If you want to further secure your network, you should probably take a look at NetworkPolicies. They allow granular configuration of access to pods and services. You can, for example, only allow external ingress to that pod on port 8888.
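A minimal NetworkPolicy sketch along those lines (the pod label and the source range are assumptions):
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-8888-only
spec:
  podSelector:
    matchLabels:
      app: myapp          # assumption: the pod's label
  policyTypes:
  - Ingress
  ingress:
  - from:
    - ipBlock:
        cidr: 0.0.0.0/0   # assumption: narrow this to the range you trust
    ports:
    - protocol: TCP
      port: 8888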

Istio - load balance mesh internal HTTP2 traffic to non-standard port

I want to load balance mesh-internal HTTP2 traffic to my ClusterIP Service per request, across all its available replicas, using Istio; the first iteration is intended to work between two deployments within a single namespace, but I can't quite get there. I need to load balance on a non-standard port, and I'm using the standard port as a control group.
I was able to configure Istio so that requests from one long-lived connection to the service FQDN on standard port 80 are round-robined correctly, but a long-lived connection to a non-standard port such as 13080 will not round-robin; instead a single pod gets all the requests (the behaviour looks like the K8s "iptables random" approach used in Services, which only balances per connection, not per request).
Here's my most successful VirtualService definition yet:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: vs
  namespace: example
spec:
  gateways:
  - mesh
  hosts:
  - "*.example.com"
  http:
  - match:
    - authority:
        regex: "(.*.)?pods.example.com(:80)?"
    route:
    - destination:
        host: pods.example.svc.cluster.local
        port:
          number: 80
  - match:
    - authority:
        regex: "(.*.)?pods.example.com:13080"
    route:
    - destination:
        host: pods.example.svc.cluster.local
        port:
          number: 13080
Ports are defined in the Service like this:
- name: http2
  port: 80
  protocol: TCP
  targetPort: 80
- name: http2-nonstd
  port: 13080
  protocol: TCP
  targetPort: 13080
Using Istio 1.6.2. What am I missing?
EDIT: The original question had a typo in the VirtualService definition's authority match for port 13080: there was exact instead of regex. Nothing changed after fixing it, however. This supports the hypothesis that for some reason Istio ignores the non-standard port.

gRPC & HTTP servers on GKE Ingress failing healthcheck for gRPC backend

I want to deploy gRPC + HTTP servers on GKE with HTTP/2 and mutual TLS. My deployment has both a readiness probe and a liveness probe with a custom path. I expose both the gRPC and HTTP servers via an Ingress.
deployment's probes and exposed ports:
livenessProbe:
  failureThreshold: 3
  httpGet:
    path: /_ah/health
    port: 8443
    scheme: HTTPS
  periodSeconds: 10
  successThreshold: 1
  timeoutSeconds: 1
readinessProbe:
  failureThreshold: 3
  httpGet:
    path: /_ah/health
    port: 8443
    scheme: HTTPS
name: grpc-gke
ports:
- containerPort: 8443
  protocol: TCP
- containerPort: 50052
  protocol: TCP
NodePort service:
apiVersion: v1
kind: Service
metadata:
  name: grpc-gke-nodeport
  labels:
    app: grpc-gke
  annotations:
    cloud.google.com/app-protocols: '{"grpc":"HTTP2","http":"HTTP2"}'
    service.alpha.kubernetes.io/app-protocols: '{"grpc":"HTTP2", "http": "HTTP2"}'
spec:
  type: NodePort
  ports:
  - name: grpc
    port: 50052
    protocol: TCP
    targetPort: 50052
  - name: http
    port: 443
    protocol: TCP
    targetPort: 8443
  selector:
    app: grpc-gke
Ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: grpc-gke-ingress
  annotations:
    kubernetes.io/ingress.allow-http: "false"
    #kubernetes.io/ingress.global-static-ip-name: "grpc-gke-ip"
  labels:
    app: grpc-gke
spec:
  rules:
  - http:
      paths:
      - path: /_ah/*
        backend:
          serviceName: grpc-gke-nodeport
          servicePort: 443
  backend:
    serviceName: grpc-gke-nodeport
    servicePort: 50052
The pod does exist and has a "green" status, before creating the liveness and readiness probes. I see regular logs on my server showing that both the /_ah/live and /_ah/ready endpoints are called by kube-probe, and the server responds with 200 responses.
I use a Google-managed TLS certificate on the load balancer (LB). My HTTP server creates a self-signed certificate, inspired by this blog.
I create the Ingress after I start seeing the probes' logs. After that it creates an LB with two backends, one for HTTP and one for gRPC. The HTTP backend's health checks are OK and the HTTP server is accessible from the Internet. The gRPC backend's health check fails, so the LB does not route the gRPC protocol and I receive 502 error responses.
This is with GKE master 1.12.7-gke.10. I also tried the newer 1.13 and older 1.11 masters. The cluster has HTTP load balancing enabled and is VPC-native. There are firewall rules to allow access from the LB to my pods (I even tried allowing all ports from all IP addresses). Delaying the probes does not help either.
The funny thing is that I deployed nearly the same setup, with just a different server Docker image, a couple of months ago, and it is running without any issues. I can even deploy new Docker images of the server and everything is great. I cannot find any difference between the two.
There is one other issue: the Ingress is stuck in the "Creating Ingress" state for days. It never finishes and never sees the LB. The Ingress' LB never gets a front-end, and I always have to manually add an HTTP/2 front-end with a static IP and a Google-managed TLS certificate. This should happen only for clusters created without "HTTP load balancing", but in my case it happens every time, for all my "HTTP load balancing enabled" clusters. The working deployment has been in this state for months already.
Any ideas why the gRPC backend's health check could be failing even though I see logs that the readiness and liveness endpoints are called by kube-probe?
EDIT:
describe svc grpc-gke-nodeport
Name: grpc-gke-nodeport
Namespace: default
Labels: app=grpc-gke
Annotations: cloud.google.com/app-protocols: {"grpc":"HTTP2","http":"HTTP2"}
kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/app-protocols":"{\"grpc\":\"HTTP2\",\"http\":\"HTTP2\"}",...
service.alpha.kubernetes.io/app-protocols: {"grpc":"HTTP2", "http": "HTTP2"}
Selector: app=grpc-gke
Type: NodePort
IP: 10.4.8.188
Port: grpc 50052/TCP
TargetPort: 50052/TCP
NodePort: grpc 32148/TCP
Endpoints: 10.0.0.25:50052
Port: http 443/TCP
TargetPort: 8443/TCP
NodePort: http 30863/TCP
Endpoints: 10.0.0.25:8443
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
and the health check for the gRPC backend is an HTTP/2 GET using path / on port 32148. Its description is "Default kubernetes L7 Loadbalancing health check.", whereas the description of the HTTP back-end's health check is "Kubernetes L7 health check generated with readiness probe settings.". Thus the health check for the gRPC back-end is not created from the readiness probe.
Editing the health check to point to port 30863 and changing the path to the readiness probe's path fixes the issue.
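As a side note, on newer GKE versions this kind of health-check override can be declared with a BackendConfig attached to the Service port instead of being edited by hand (a sketch; the resource name is an assumption):
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: grpc-healthcheck     # assumption: hypothetical name
spec:
  healthCheck:
    type: HTTP2
    requestPath: /_ah/health # reuse the readiness probe's path
    port: 8443               # the serving container port
The config would then be referenced from the Service with an annotation like cloud.google.com/backend-config: '{"ports": {"grpc": "grpc-healthcheck"}}'.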
GKE Ingress just recently started supporting full gRPC in beta (whereas HTTP2-to-HTTP1.1 conversion was used in the past). To use gRPC, though, you need to add the annotation cloud.google.com/app-protocols: '{"http2-service":"HTTP2"}' to the Service.
Refer to this how-to doc for more details.
Editing the health check to point to the readiness probe's path and changing the port to that of the HTTP back-end fixed this issue (look for the port in the HTTP back-end's health check; it is the NodePort one). It now runs without any issues.
Using the same health check for the gRPC back-end as for the HTTP back-end did not work; it was reset back to its own health check. Even deleting the gRPC back-end's health check did not help, as it was recreated. Only editing it to use a different port and path helped.

Session Affinity Settings for multiple Pods exposed by a single service

I have set up MetalLB as the LB, with Nginx Ingress installed on a K8S cluster.
I have read about session affinity and its significance but so far I do not have a clear picture.
How can I create a single service exposing multiple pods of the same application?
After creating the single service entry point, how do I map a specific client IP to a Pod abstracted by the service?
Is there any blog explaining this concept in terms of how the mapping between client IP and Pod is done in Kubernetes?
But I do not see the client's IP in the YAML. So how is this service going to map traffic from the respective clients to its endpoints? That is the question I have.
kind: Service
apiVersion: v1
metadata:
  name: my-service
spec:
  selector:
    app: my-app
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 80
  sessionAffinity: ClientIP
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10000
The main concept of session affinity is to redirect traffic from one client always to a specific pod. Please keep in mind that session affinity is a best-effort method, and there are scenarios where it will fail due to pod restarts or network errors.
There are two main types of Session Affinity:
1) Based on Client IP
This option works well for scenarios where there is only one client per IP. With this method you don't need an Ingress/proxy between the K8s services and the client.
The client IP should be static, because each time the client changes IP, it will be redirected to another pod.
To enable session affinity in Kubernetes, add the following to the service definition:
service.spec.sessionAffinity: ClientIP
Because the community has provided a proper manifest for this method above, I will not duplicate it.
2) Based on Cookies
It works when there are multiple clients from the same IP, because the cookie is stored at the web browser level. This method requires an Ingress object. Steps to apply this method, with more detailed information, can be found here under the Session affinity based on Cookie section; a sketch follows the steps below.
Create NGINX controller deployment
Create NGINX service
Create Ingress
Redirect your public DNS name to the NGINX service's public/external IP.
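A minimal sketch of such a cookie-affinity Ingress using ingress-nginx annotations (host, service name and port are assumptions):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: cookie-affinity-ingress
  annotations:
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-name: "route"    # cookie that pins a client to a pod
    nginx.ingress.kubernetes.io/session-cookie-max-age: "10800"
spec:
  rules:
  - host: myapp.example.com    # assumption
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-service   # the service from the manifest above
            port:
              number: 80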
About mapping the client IP to a Pod, according to the documentation:
kube-proxy is responsible for SessionAffinity. One of kube-proxy's jobs is writing to iptables (more details here), so that's how it is mapped.
Articles which might help with understanding Session Affinity:
https://sookocheff.com/post/kubernetes/building-stateful-services/
https://medium.com/@diegomrtnzg/redirect-your-users-to-the-same-pod-by-using-session-affinity-on-kubernetes-baebf6a1733b
Follow the service reference for session affinity:
kind: Service
apiVersion: v1
metadata:
  name: my-service
spec:
  selector:
    app: my-app
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 80
  sessionAffinity: ClientIP
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10000

How to send data from a container within a Pod, to a container within a separate Pod - within the same cluster?

I have a React frontend and a Node.js backend. Each is in a separate container, within separate Pods, within the same cluster in K8s.
I want to send data between them without having to use IP addresses. I know Kubernetes has a feature that lets pods talk to each other inside the same cluster, and I think it's related to the selector label defined within the Service files.
I have created a ClusterIP service for my React app and another for my server. I have also created an ingress file for my application. I know my ingress works, as I can access my UI and hit the health-check endpoint of my server, so I know they are exposed to the outside world correctly. My problem is how to communicate internally within K8s.
Within my React app I have tried to write:
axios.post("/api/test", {
  value: "TestValue"
});
But the /api/test endpoint on my server never gets hit with this.
Backend Server Cluster IP - - - - 
 
apiVersion: v1
kind: Service
metadata:
  name: server-model-cluster-ip-service
spec:
  type: ClusterIP
  selector:
    component: server-model
  ports:
  - port: 8050
    targetPort: 8050
React UI Cluster IP - - - -
apiVersion: v1
kind: Service
metadata:
  name: react-ui-cluster-ip-service
spec:
  type: ClusterIP
  selector:
    component: react-ui
  ports:
  - port: 3000
    targetPort: 3000
Ingress File - - - - -
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-service
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  rules:
  - http:
      paths:
      - path: /api/?(.*)
        backend:
          serviceName: react-ui-cluster-ip-service
          servicePort: 8050
      - path: /server/?(.*)
        backend:
          serviceName: server-model-cluster-ip-service
          servicePort: 8050
I understand the label selector is what maps my React ClusterIP service to the deployment for my UI, and similarly for my server ClusterIP service and the server deployment. I think I am right in saying I can use the selector somehow to send axios/HTTP requests to other pods, like:
axios.post("/PODNAME/api/test", {
  value: "TestValue"
});
Could anyone tell me if I am completely wrong or missing something obvious, please? :)
In this part of the ingress, the service name react-ui-cluster-ip-service is used, and that service runs on port 3000, as mentioned in the service spec file.
So you are sending traffic to the proper service name, but the port is the wrong one:
- path: /api/?(.*)
  backend:
    serviceName: react-ui-cluster-ip-service
    servicePort: 8050
I think that is why you are not able to send requests to /api/?(.*).
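As the answer suggests, a corrected rule would keep the service name and point to the port the service actually exposes (3000, per the service spec above):
- path: /api/?(.*)
  backend:
    serviceName: react-ui-cluster-ip-service
    servicePort: 3000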
From your service spec file you can also remove type: ClusterIP; you can use just the service name to resolve services inside the Kubernetes cluster.
To answer your question title: containers within a pod can talk over localhost, while containers within separate pods can talk over the service name; there is no need to add a service type of ClusterIP or NodePort.
I have a few concerns here.
As @Harsh Manvar mentioned in his answer, Kubernetes provides a mechanism for discovering internal services which guarantees intercommunication between Pods within the same cluster, either by IP address or by the relevant DNS name of the service. Therefore you might be able to reach your backend server from a particular frontend Pod without involving Ingress at all, as the Ingress controller acts as an edge router and exposes HTTP and HTTPS network traffic from outside the cluster to the corresponding Kubernetes services.
You also used a rewrite expression in the path-based routing rules within the Ingress object. In that scenario the rewrite works out as follows: server-model-cluster-ip-service/api/test is rewritten to the server-model-cluster-ip-service/test URI, and that becomes the final path for your backend service. Given that, the axios.post("server-model-cluster-ip-service/api/test", { value: "TestValue" }) request invoked from the React UI Pod might not hit the target backend service.
I have just given some points to consider for further troubleshooting; at the very least you can log into the frontend Pod and check connectivity to the target backend service accordingly.