Load balancing with reserved IPs in Google Container Engine - Kubernetes

I want to host a website (simple nginx+php-fpm) on Google Container Engine. I built a replication controller that controls the nginx and php-fpm pod. I also built a service that can expose the site.
How do I link my service to a public (and reserved) IP address so that the web server sees the client IP addresses?
I tried creating an ingress. It provides the client IP through an extra HTTP header. Unfortunately ingress does not support reserved IPs yet:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
  - host: example.org
    http:
      paths:
      - backend:
          serviceName: example-web
          servicePort: 80
        path: /
I also tried creating a service with a reserved IP. This gives me a public IP address but I think the client IP is lost:
apiVersion: v1
kind: Service
metadata:
  name: 'example-web'
spec:
  selector:
    app: example-web
  ports:
  - port: 80
    targetPort: 80
  loadBalancerIP: "10.10.10.10"
  type: LoadBalancer
I would set up the HTTP load balancer manually, but I didn't find a way to configure a cluster IP as a backend for the load balancer.
This seems like a very basic use case to me and it stands in the way of using Container Engine in production. What am I missing? Where am I wrong?

As you are running in Google Container Engine you could set up a Compute Engine HTTP load balancer for your static IP. The target proxy will add X-Forwarded-* headers for you.
Set up your Kubernetes service with type NodePort and add a nodePort field. This way the nodePort is accessible via kube-proxy on every node's IP address regardless of where the pod is running:
apiVersion: v1
kind: Service
metadata:
  name: 'example-web'
spec:
  selector:
    app: example-web
  ports:
  - nodePort: 30080
    port: 80
    targetPort: 80
  type: NodePort
Create a backend service with an HTTP health check on port 30080 for your instance group (the cluster nodes).
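If it helps, the remaining Compute Engine pieces can be sketched roughly with gcloud as follows; the resource names (nodeport-hc, example-ig, example-backend, example-ip) and the zone are placeholders, and flags may differ between gcloud versions:
# HTTP health check against the NodePort exposed above
gcloud compute health-checks create http nodeport-hc --port 30080
# Name the NodePort on the instance group so the backend service can reference it
gcloud compute instance-groups set-named-ports example-ig \
    --zone us-central1-a --named-ports http30080:30080
# Backend service balancing across the cluster nodes
gcloud compute backend-services create example-backend \
    --global --protocol HTTP --port-name http30080 --health-checks nodeport-hc
gcloud compute backend-services add-backend example-backend \
    --global --instance-group example-ig --instance-group-zone us-central1-a
# URL map, target proxy, and a global forwarding rule bound to the reserved IP
gcloud compute url-maps create example-map --default-service example-backend
gcloud compute target-http-proxies create example-proxy --url-map example-map
gcloud compute forwarding-rules create example-fr \
    --global --address example-ip --target-http-proxy example-proxy --ports 80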

Related

Microk8s reaches internet but not internal network

I've just installed Ubuntu 22.04 on a VMware virtual server and started using microk8s. The server is part of a local network in which there are some servers, including Microsoft AD and IIS servers that handle the network.
I have Docker installed on the Ubuntu system and can run all the containers of the web app with no problem via Docker. In particular, I have a service (a container) that connects to the Windows AD server of the local network to authenticate users of the web app. On the host it works with no problem: it can reach the AD server and also other servers in the network and do all the necessary operations.
On the other hand, when run on Kubernetes via microk8s, all the services work and are reachable from the local network, and at the same time the containers can reach the external network (outside our local network, e.g. www.google.com). Only the internal network seems to be unreachable; I always get a timeout error.
What I tried (but did not work)
External service
https://kubernetes.io/docs/concepts/services-networking/service/#services-without-selectors
Check the DNS resolution on the host that gets copied into the container
Note
I'm not sure what commands should be run to provide the most useful information about the configuration, so I'll be iterating over this question, extending it with logs and other meaningful information.
Thanks
Edit 11/10/2022
I've enabled the following addons:
microk8s is running
high-availability: no
  datastore master nodes: 127.0.0.1:19001
  datastore standby nodes: none
addons:
  enabled:
    dns          # (core) CoreDNS
    ha-cluster   # (core) Configure high availability on the current node
    helm         # (core) Helm - the package manager for Kubernetes
    helm3        # (core) Helm 3 - the package manager for Kubernetes
    ingress      # (core) Ingress controller for external access
    metallb      # (core) Loadbalancer for your Kubernetes cluster
Another strange thing is that the containers can access the Postgres database on the host via the host's IP address (10.1.1.xxx).
Edit 2 12/10/2022
Here's the ingress yaml file
apiVersion: v1
kind: Service
metadata:
  name: ingress
  namespace: ingress
spec:
  selector:
    name: nginx-ingress-microk8s
  type: LoadBalancer
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 80
  - name: https
    protocol: TCP
    port: 443
    targetPort: 443
---
#
# Ingress
#
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: main-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  rules:
  - http:
      paths:
      - path: /api/erp(/|$)(.*)
        pathType: Prefix
        backend:
          service:
            name: erp-service
            port:
              number: 8000
      - path: /api/auth(/|$)(.*)
        pathType: Prefix
        backend:
          service:
            name: auth-service
            port:
              number: 8000
      - path: /()(.*)
        pathType: Prefix
        backend:
          service:
            name: ui-service
            port:
              number: 3000
I can access the UI, and by using the host's IP and /api/auth I can access the Swagger/OpenAPI online documentation.
To this day I haven't managed to find any solution other than to circumvent the problem and use a "proxy" endpoint, as suggested in
Accessing an external InfluxDb Database from a microk8s pod using selectorless service and manual endpoint?
Basically, it creates a Service that can be accessed from the cluster and an Endpoints object that points to the external resource.
Actual source config taken from the answer
kind: Service
apiVersion: v1
metadata:
  name: influxdb-service-lb
  #namespace: ingress
spec:
  type: LoadBalancer
  loadBalancerIP: 10.1.2.61
  # selector:
  #   app: grafana
  ports:
  - name: http
    protocol: TCP
    port: 8086
    targetPort: 8086
---
apiVersion: v1
kind: Endpoints
metadata:
  name: influxdb-service-lb
subsets:
- addresses:
  - ip: 10.1.2.220
  ports:
  - name: influx
    protocol: TCP
    port: 8086
If I manage to find a solution, I'll update this answer.

Unable to open Istio ingress-gateway for gRPC

This question is about my inability to connect a gRPC client to a gRPC service hosted in Kubernetes (AWS EKS), with an Istio ingress gateway.
On the kubernetes side: I have a container with a Go process listening on port 8081 for gRPC. The port is exposed at the container level. I define a kubernetes service and expose 8081. I define an istio gateway which selects istio: ingressgateway and opens port 8081 for gRPC. Finally I define an istio virtualservice with a route for anything on port 8081.
On the client side: I have a Go client which can send gRPC requests to the service.
It works fine when I kubectl port-forward -n mynamespace service/myservice 8081:8081 and call my client via client -url localhost:8081.
When I close the port forward and call with client -url [redacted]-[redacted].us-west-2.elb.amazonaws.com:8081 my client fails to connect. (That URL is the output of kubectl get svc istio-ingressgateway -n istio-system -o jsonpath='{.status.loadBalancer.ingress[0].hostname}' with :8081 appended.)
Logs:
I looked at the istio-system/istio-ingressgateway service logs. I do not see an attempted connection.
I do see the bookinfo connections I made earlier when going over the istio bookinfo tutorial. That tutorial worked, I was able to open a browser and see the bookinfo product page, and the ingressgateway logs show "GET /productpage HTTP/1.1" 200. So the Istio ingress-gateway works, it's just that I don't know how to configure it for a new gRPC endpoint.
Istio's Ingress-Gateway
kubectl describe service -n istio-system istio-ingressgateway
outputs the following, which I suspect is the problem: port 8081 is not listed despite my efforts to open it. I'm puzzled by how many ports are opened by default; I didn't open them (comments on how to close ports I don't use would be welcome, but they aren't the reason for this SO question).
Name: istio-ingressgateway
Namespace: istio-system
Labels: [redacted]
Annotations: [redacted]
Selector: app=istio-ingressgateway,istio=ingressgateway
Type: LoadBalancer
IP: [redacted]
LoadBalancer Ingress: [redacted]
Port: status-port 15021/TCP
TargetPort: 15021/TCP
NodePort: status-port 31125/TCP
Endpoints: 192.168.101.136:15021
Port: http2 80/TCP
TargetPort: 8080/TCP
NodePort: http2 30717/TCP
Endpoints: 192.168.101.136:8080
Port: https 443/TCP
TargetPort: 8443/TCP
NodePort: https 31317/TCP
Endpoints: 192.168.101.136:8443
Port: tcp 31400/TCP
TargetPort: 31400/TCP
NodePort: tcp 31102/TCP
Endpoints: 192.168.101.136:31400
Port: tls 15443/TCP
TargetPort: 15443/TCP
NodePort: tls 30206/TCP
Endpoints: 192.168.101.136:15443
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
So I think I did not properly open port 8081 for gRPC. What other logs or tests can I run to help identify where this is coming from?
Here is the relevant yaml:
Kubernetes Istio VirtualService, whose intent is to route anything on port 8081 to myservice:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: myservice
  namespace: mynamespace
spec:
  hosts:
  - "*"
  gateways:
  - myservice
  http:
  - match:
    - port: 8081
    route:
    - destination:
        host: myservice
Kubernetes Istio Gateway, whose intent is to open port 8081 for gRPC:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: myservice
  namespace: mynamespace
spec:
  selector:
    istio: ingressgateway
  servers:
  - name: myservice-plaintext
    port:
      number: 8081
      name: grpc-svc-plaintext
      protocol: GRPC
    hosts:
    - "*"
Kubernetes service: showing port 8081 is exposed at the service level, which I confirmed with the port-forward test mentioned earlier
apiVersion: v1
kind: Service
metadata:
  name: myservice
  namespace: mynamespace
  labels:
    app: myservice
spec:
  selector:
    app: myservice
  ports:
  - protocol: TCP
    port: 8081
    targetPort: 8081
    name: grpc-svc-plaintext
Kubernetes deployment: showing port 8081 is exposed at the container level, which I confirmed with the port-forward test mentioned earlier
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myservice
  namespace: mynamespace
  labels:
    app: myservice
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myservice
  template:
    metadata:
      labels:
        app: myservice
    spec:
      containers:
      - name: myservice
        image: [redacted]
        ports:
        - containerPort: 8081
Re-checking that DNS works on the client:
getent hosts [redacted]-[redacted].us-west-2.elb.amazonaws.com
outputs 3 IPs; I'm assuming that's good.
[IP_1 redacted] [redacted]-[redacted].us-west-2.elb.amazonaws.com
[IP_2 redacted] [redacted]-[redacted].us-west-2.elb.amazonaws.com
[IP_3 redacted] [redacted]-[redacted].us-west-2.elb.amazonaws.com
Checking Istio Ingressgateway's routes:
istioctl proxy-status istio-ingressgateway-[pod name]
istioctl proxy-config routes istio-ingressgateway-[pod name]
returns
Clusters Match
Listeners Match
Routes Match (RDS last loaded at Wed, 23 Sep 2020 13:59:41)
NOTE: This output only contains routes loaded via RDS.
NAME          DOMAINS     MATCH                  VIRTUAL SERVICE
http.8081     *           /*                     myservice.mynamespace
              *           /healthz/ready*
              *           /stats/prometheus*
Port 8081 is routed to myservice.mynamespace, seems good to me.
UPDATE 1:
I am starting to understand I can't open port 8081 using the default istio ingress gateway. That service does not expose that port, and I was assuming creating a gateway would update the service "under the hood" but that's not the case.
The external ports that I can pick from are: 80, 443, 31400, 15443, 15021 and I think my gateway needs to rely only on those. I've updated my gateway and virtual service to use port 80 and the client then connects to the server just fine.
That means I have to differentiate between multiple services not by port (can't route from the same port to two services obviously), but by SNI, and I'm unclear how to do that in gRPC, I'm guessing I can add a Host:[hostname] in the gRPC header. Unfortunately, if that's how I can route, it means headers need to be read on the gateway, and that mandates terminating TLS at the gateway when I was hoping to terminate at the pod.
I am starting to understand I can't open port 8081 using the default istio ingress gateway. That service does not expose that port, and I was assuming creating a gateway would update the service "under the hood" but that's not the case. The external ports that I can pick from are: 80, 443, 31400, 15443, 15021 and I think my gateway needs to rely only on those. I've updated my gateway and virtual service to use port 80 and the client then connects to the server just fine.
I'm not sure how exactly you tried to add a custom port for the ingress gateway, but it's possible.
As far as I checked, it's possible to do this in 3 ways; here are the options, with links to examples provided by @A_Suh, @Ryota and @peppered (see the sketch after the list for the Istio Operator route).
Kubectl edit
Helm
Istio Operator
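For illustration, a minimal sketch of the Istio Operator route, assuming the default istio-ingressgateway; the port name grpc-plaintext is made up here, and the repeated default ports should match your installation:
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  components:
    ingressGateways:
    - name: istio-ingressgateway
      enabled: true
      k8s:
        service:
          ports:
          # keep the default ports you still need...
          - name: status-port
            port: 15021
            targetPort: 15021
          - name: http2
            port: 80
            targetPort: 8080
          - name: https
            port: 443
            targetPort: 8443
          # ...and add the custom gRPC port
          - name: grpc-plaintext
            port: 8081
            targetPort: 8081
Applied with something like istioctl install -f gateway-ports.yaml, the ingressgateway Service should then expose 8081 as well.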
Additional resources:
How to create custom istio ingress gateway controller?
How to configure ingress gateway in istio?
That means I have to differentiate between multiple services not by port (can't route from the same port to two services obviously), but by SNI, and I'm unclear how to do that in gRPC, I'm guessing I can add a Host:[hostname] in the gRPC header. Unfortunately, if that's how I can route, it means headers need to be read on the gateway, and that mandates terminating TLS at the gateway when I was hoping to terminate at the pod.
I see you have already created a new question here, so let's just move there.
I have added the port to the ingress gateway successfully, but I still couldn't get the client connected to the server. For me too, port-forwarding works, but when I try to connect through the ingress I get the error below. Here the Istio ingressgateway is on GKE, so it's using the global HTTPS load balancer.
Jun 14, 2021 8:28:08 PM com.manning.mss.ch12.grpc.sample01.InventoryClient updateInventory
WARNING: RPC failed: Status{code=INTERNAL, description=http2 exception, cause=io.grpc.netty.shaded.io.netty.handler.codec.http2.Http2Exception: First received frame was not SETTINGS. Hex dump for first 5 bytes: 485454502f
at io.grpc.netty.shaded.io.netty.handler.codec.http2.Http2Exception.connectionError(Http2Exception.java:85)
at io.grpc.netty.shaded.io.netty.handler.codec.http2.Http2ConnectionHandler$PrefaceDecoder.verifyFirstFrameIsSettings(Http2ConnectionHandler.java:350)
at io.grpc.netty.shaded.io.netty.handler.codec.http2.Http2ConnectionHandler$PrefaceDecoder.decode(Http2ConnectionHandler.java:251)
at io.grpc.netty.shaded.io.netty.handler.codec.http2.Http2ConnectionHandler.decode(Http2ConnectionHandler.java:450)
at io.grpc.netty.shaded.io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:502)
at io.grpc.netty.shaded.io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:441)
at io.grpc.netty.shaded.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:278)
at io.grpc.netty.shaded.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:359)
at io.grpc.netty.shaded.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:345)
at io.grpc.netty.shaded.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:337)
at io.grpc.netty.shaded.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1408)
at io.grpc.netty.shaded.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:359)
at io.grpc.netty.shaded.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:345)
at io.grpc.netty.shaded.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:930)
at io.grpc.netty.shaded.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:163)
at io.grpc.netty.shaded.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:677)
at io.grpc.netty.shaded.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:612)
at io.grpc.netty.shaded.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:529)
at io.grpc.netty.shaded.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:491)
at io.grpc.netty.shaded.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:905)
at io.grpc.netty.shaded.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.lang.Thread.run(Thread.java:748)

K8s service LB to external services w/ nginx-ingress controller

Is it possible to configure k8s nginx-ingress as an LB to have a K8s Service actively connect to an external backend hosted on external hosts/ports (where one will be enabled at a time, connecting back to the cluster service)?
Similar to Envoy proxy? This is on vanilla K8s, on-prem.
So rather than balancing load from
client -> cluster -> service,
I am looking for
service -> nginx-ingress -> external-backend.
Define a Kubernetes Service with no selector. Then you need to define an Endpoints object; you can put the IP and port in the Endpoints. Normally you do not define Endpoints for Services, but because the Service will not have a selector you will need to provide an Endpoints object with the same name as the Service.
Then you point the Ingress to the Service.
Here's an example that exposes an Ingress on the cluster and sends the traffic to 192.168.88.1 on TCP 8081.
apiVersion: v1
kind: Service
metadata:
  name: router
  namespace: default
spec:
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8081
---
apiVersion: v1
kind: Endpoints
metadata:
  name: router
  namespace: default
subsets:
- addresses:
  - ip: 192.168.88.1
  - ip: 192.168.88.2 # As per question below
  ports:
  - port: 8081
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: router
  namespace: default
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: my-router.domain.com
    http:
      paths:
      - path: /
        backend:
          serviceName: router
          servicePort: 80
While defining the Ingress, use the nginx.ingress.kubernetes.io/configuration-snippet annotation. Also enable the proxy protocol using use-proxy-protocol: "true" in the controller ConfigMap.
Using this annotation you can add additional configuration to the NGINX location block.
Please take a look: ingress-nginx-issue, advanced-configuration-with-annotations.
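A rough sketch of both pieces; the ConfigMap name and namespace depend on how the controller was installed, and the snippet body is only an example:
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller   # name/namespace depend on your installation
  namespace: ingress-nginx
data:
  use-proxy-protocol: "true"       # the load balancer in front must also speak proxy protocol
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: router
  namespace: default
  annotations:
    kubernetes.io/ingress.class: "nginx"
    # extra directives injected into the generated location block
    nginx.ingress.kubernetes.io/configuration-snippet: |
      proxy_set_header X-Forwarded-Proto $scheme;
spec:
  rules:
  - host: my-router.domain.com
    http:
      paths:
      - path: /
        backend:
          serviceName: router
          servicePort: 80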

Is there a way to configure an EKS service to use HTTPS?

Here is the config for our current EKS service:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: main-api
  name: main-api-svc
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
spec:
  externalTrafficPolicy: Cluster
  ports:
  - name: http-port
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: main-api
  sessionAffinity: None
  type: LoadBalancer
Is there a way to configure it to use HTTPS instead of HTTP?
To terminate HTTPS traffic on Amazon Elastic Kubernetes Service and pass it to a backend:
1. Request a public ACM certificate for your custom domain.
2. Identify the ARN of the certificate that you want to use with the load balancer's HTTPS listener.
3. In your text editor, create a service.yaml manifest file based on the following example. Then, edit the annotations to provide the ACM ARN from step 2.
apiVersion: v1
kind: Service
metadata:
  name: echo-service
  annotations:
    # Note that the backend talks over HTTP.
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
    # TODO: Fill in with the ARN of your certificate.
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:{region}:{user id}:certificate/{id}
    # Only run SSL on the port named "https" below.
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"
spec:
  type: LoadBalancer
  selector:
    app: echo-pod
  ports:
  - name: http
    port: 80
    targetPort: 8080
  - name: https
    port: 443
    targetPort: 8080
4. To create a Service object, run the following command:
$ kubectl create -f service.yaml
5. To return the DNS URL of the service of type LoadBalancer, run the following command:
$ kubectl get service
Note: If you have many active services running in your cluster, be sure to get the URL of the right service of type LoadBalancer from the command output.
6. Open the Amazon EC2 console, and then choose Load Balancers.
7. Select your load balancer, and then choose Listeners.
8. For Listener ID, confirm that your load balancer port is set to 443.
9. For SSL Certificate, confirm that the SSL certificate that you defined in the YAML file is attached to your load balancer.
10. Associate your custom domain name with your load balancer name.
11. Finally, in a web browser, test your custom domain with the following HTTPS protocol:
https://yourdomain.com
You should use an Ingress (and not a Service) to expose HTTP/S outside of the cluster.
I suggest using the ALB Ingress Controller.
There is a complete walkthrough here,
and you can see how to set up TLS/SSL here.
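For reference, a TLS-terminating ALB Ingress typically looks something like the sketch below; the resource names, certificate ARN, and target-type are placeholders and may differ depending on the controller version you install:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: main-api-ingress
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS":443}]'
    # ACM certificate used to terminate TLS on the ALB
    alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:{region}:{account}:certificate/{id}
spec:
  rules:
  - http:
      paths:
      - path: /*
        backend:
          serviceName: main-api-svc
          servicePort: 80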

Setting up internal service on GKE without external IP

I am new to GKE and Kubernetes. I installed Elasticsearch on GKE using Google Click to Deploy. I also installed nginx-ingress and secured the Elasticsearch service with HTTP basic authentication (through the ingress). I created an external static IP and assigned it to the ingress controller using the loadBalancerIP field in the ingress-controller service configuration.
Questions:
I have App Engine services running in GCP which need to access this Elasticsearch setup. Can I avoid exposing my Elasticsearch service externally, with some kind of "internal" IP that only my App Engine services can access? Is using a VPC one of the ways of doing this?
I see that my ingress was also assigned an external IP address (the static IP I created was assigned to the nginx-ingress-controller service). However, when I hit this IP on port 80 I get connection refused, and on port 9200 it times out. Can I avoid having two external IPs? How secure is this ingress IP address? What are its open ports?
Here is my ingress configuration:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/auth-realm: Authentication Required - ok
    nginx.ingress.kubernetes.io/auth-secret: basic-auth
    nginx.ingress.kubernetes.io/auth-type: basic
  name: basic-ingress
  namespace: default
spec:
  rules:
  - http:
      paths:
      - backend:
          serviceName: elasticsearch-1-elasticsearch-svc
          servicePort: 9200
        path: /
Here is the ingress controller service configuration:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: nginx-ingress
    chart: nginx-ingress-1.6.15
    component: controller
    heritage: Tiller
    release: nginx-ingress
  name: nginx-ingress-controller
  namespace: default
spec:
  clusterIP: <Some IP>
  externalTrafficPolicy: Cluster
  loadBalancerIP: <External IP>
  ports:
  - name: http
    nodePort: 30290
    port: 80
    protocol: TCP
    targetPort: http
  - name: https
    nodePort: 30119
    port: 443
    protocol: TCP
    targetPort: https
  selector:
    app: nginx-ingress
    component: controller
    release: nginx-ingress
  sessionAffinity: None
  type: LoadBalancer
My suggestion is to use 2 load balancers, 1 public and 1 private. To create the private (internal) load balancer you just need to add the following annotation under the metadata section:
cloud.google.com/load-balancer-type: "Internal"
Reference:
https://cloud.google.com/kubernetes-engine/docs/how-to/internal-load-balancing
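A minimal sketch of what that looks like as a second copy of the ingress-controller Service, reusing the nginx-ingress selector labels from above (the Service name is illustrative):
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-controller-internal
  namespace: default
  annotations:
    # tells GKE to provision an internal TCP load balancer instead of an external one
    cloud.google.com/load-balancer-type: "Internal"
spec:
  type: LoadBalancer
  selector:
    app: nginx-ingress
    component: controller
    release: nginx-ingress
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: http
  - name: https
    port: 443
    protocol: TCP
    targetPort: https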