I'm doing SSL termination using Ingress for HTTPS traffic. But I also want to achieve the same thing for a custom port (HTTP virtual host). For example, https://example.com:1234 should go to http://example.com:1234.
Nginx Ingress has a ConfigMap where we can expose custom ports, but SSL termination doesn't work there.
Any workaround? I wonder if I could redirect the incoming HTTPS using .htaccess instead.
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  1234: "test-web-services/httpd:1234"
---
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  externalTrafficPolicy: Local
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: http
  - name: port-1234
    port: 1234
    protocol: TCP
    targetPort: 1234
SSL Termination for TCP traffic is not a feature directly supported by nginx-ingress.
It is more widely described in this Github issue:
Github.com: Kubernetes: Ingress-nginx: Issues: [nginx] Support SSL for TCP
You can also find in this thread that some people were successful in implementing a workaround allowing them to support terminating SSL with TCP services. Specifically:
Github.com: Kubernetes: Ingress-nginx: Issues: [nginx] Support SSL for TCP: Comment 749026036
As your example featured a "downgrade" from HTTPS communication to HTTP, it could be beneficial to add that you can alter the way the NGINX Ingress Controller connects to your backend. Let me elaborate on that.
Please consider this as a workaround:
By default your NGINX Ingress Controller will connect to your backend with HTTP. This can be changed with the following annotation:
nginx.ingress.kubernetes.io/backend-protocol:
Citing the official documentation:
Using backend-protocol annotations is possible to indicate how NGINX should communicate with the backend service. (Replaces secure-backends in older versions) Valid Values: HTTP, HTTPS, GRPC, GRPCS, AJP and FCGI
By default NGINX uses HTTP.
-- Kubernetes.github.io: Ingress-nginx: User guide: Nginx configuration: Annotations: Backend protocol
In this particular example the request path will be as follows:
client -- (HTTPS:443) --> Ingress controller (TLS Termination) -- (HTTP:service-port) --> Service ----> Pod
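To illustrate the annotation in context, here is a minimal sketch of an Ingress that overrides the default and keeps HTTPS between the controller and the backend; the host, secret and service names are placeholders:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    # Tell the controller to talk HTTPS to the backend instead of the default HTTP
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - example.com
    secretName: example-tls
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: httpd
            port:
              number: 443
Without the annotation, the controller terminates TLS and forwards plain HTTP to the backend, which matches the flow shown above.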
The caveat
You can use the Service of type LoadBalancer to send the traffic from port 1234 to either 80/443 of your Ingress Controller. This would make TLS termination much easier but it would force the client to use only one protocol. For example:
- name: custom
  port: 1234
  protocol: TCP
  targetPort: 443
This excerpt from the nginx-ingress Service could be used to forward HTTPS traffic to your Ingress Controller, where the request would be TLS-terminated and forwarded as HTTP to your backend. Forcing plain HTTP through that port would yield error code 400: Bad Request.
In this particular example the request path will be as follows:
client -- (HTTPS:1234) --> Ingress controller (TLS Termination) -- (HTTP:service-port) --> Service ----> Pod
I have serious problems with the configuration of Ingress on a Google Kubernetes Engine cluster for an application which expects traffic over TLS. I have configured a FrontendConfig, a BackendConfig and defined the proper annotations in the Service and Ingress YAML structures.
The Google Cloud Console reports that the backend is healthy, but if I connect to the given address, it returns 502 and a failed_to_connect_to_backend error appears in the Ingress logs.
These are my configurations:
FrontendConfig.yaml:
apiVersion: networking.gke.io/v1beta1
kind: FrontendConfig
metadata:
  name: my-frontendconfig
  namespace: my-namespace
spec:
  redirectToHttps:
    enabled: false
  sslPolicy: my-ssl-policy
BackendConfig.yaml:
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: my-backendconfig
  namespace: my-namespace
spec:
  sessionAffinity:
    affinityType: "CLIENT_IP"
  logging:
    enable: true
    sampleRate: 1.0
  healthCheck:
    checkIntervalSec: 60
    timeoutSec: 5
    healthyThreshold: 3
    unhealthyThreshold: 5
    type: HTTP
    requestPath: /health
    # The containerPort of the application in Deployment.yaml (also used for liveness and readiness Probes)
    port: 8001
Ingress.yaml:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
  namespace: my-namespace
  annotations:
    # If the class annotation is not specified it defaults to "gce".
    kubernetes.io/ingress.class: "gce"
    # Frontend Configuration Name
    networking.gke.io/v1beta1.FrontendConfig: "my-frontendconfig"
    # Static IP Address Rule Name (gcloud compute addresses create epa2-ingress --global)
    kubernetes.io/ingress.global-static-ip-name: "my-static-ip"
spec:
  tls:
  - secretName: my-secret
  defaultBackend:
    service:
      name: my-service
      port:
        number: 443
Service.yaml:
apiVersion: v1
kind: Service
metadata:
  name: my-service
  namespace: my-namespace
  annotations:
    # Specify the type of traffic accepted
    cloud.google.com/app-protocols: '{"service-port":"HTTPS"}'
    # Specify the BackendConfig to be used for the exposed ports
    cloud.google.com/backend-config: '{"default": "my-backendconfig"}'
    # Enables the Cloud Native Load Balancer
    cloud.google.com/neg: '{"ingress": true}'
spec:
  type: ClusterIP
  selector:
    app: my-application
  ports:
  - protocol: TCP
    name: service-port
    port: 443
    targetPort: app-port # this port expects TLS traffic, no plain HTTP connections
The Deployment.yaml is omitted for brevity, but it defines a liveness and readiness Probe on another port, the one defined in the BackendConfig.yaml.
The interesting thing is: if I also expose the healthcheck port through the Service.yaml (mapped to port 80), point the default backend to port 80, and simply define a rule with a path /* leading to port 443, everything seems to work just fine. But I don't want to expose the healthcheck port outside my cluster, since I also have some diagnostics information there.
Question: How can I be sure that if I connect to the Ingress endpoint with `https://MY_INGRESS_IP/`, the traffic is routed exactly as it is to the HTTPS port of the service/application, without getting the 502 error? Where am I failing to configure the Ingress?
There are a few elements to your question; I'll try to answer them here.
I don't want to expose the healthcheck port outside my cluster
The HealthCheck endpoint is technically not exposed outside the cluster; it's exposed inside Google's backbone so that the Google LoadBalancers (configured via Ingress) can reach it. You can verify that by doing a curl against https://INGRESS_IP/healthz; it will not work.
The traffic is routed exactly as it is to the HTTPS port of the service/application
The reason why 443 in your Service definition doesn't work but 80 does is that when you expose the Service on port 443, the LoadBalancer will fail to connect to a backend without a proper certificate: your backend would also have to be configured to present a certificate to the LoadBalancer to encrypt traffic. The secretName configured at the Ingress is the certificate used by the clients to connect to the LoadBalancer. The Google HTTP LoadBalancer terminates the SSL connection and initiates a new connection to the backend using whatever port you specify in the Ingress. If that port is 443 but the backend is not configured with SSL certificates, that connection will fail.
Overall you don't need to encrypt traffic between LoadBalancers and backends; it's doable but not needed, as Google encrypts that traffic at the network level anyway.
Actually, I solved it by attaching a managed certificate to the Ingress. It "magically" worked without any other change, using a Service of type ClusterIP.
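For reference, a minimal sketch of that managed-certificate approach (the domain and resource names here are placeholders, not from the original setup):
apiVersion: networking.gke.io/v1
kind: ManagedCertificate
metadata:
  name: my-managed-cert
  namespace: my-namespace
spec:
  domains:
  - example.com
The Ingress then references it with the annotation networking.gke.io/managed-certificates: "my-managed-cert" instead of spec.tls/secretName, and Google provisions the certificate for that domain.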
I don't see any link between the Service and Ingress YAML files. How are they linked and how does it work? I looked at the nginx ingress controller but couldn't find any links to the Ingress either.
How does the traffic flow? LB -> Ingress controller -> Ingress -> Backend service -> pods? And it seems only 80 and 443 are allowed by the Ingress. Does that mean any custom ports defined on the ingress-nginx Service are connected directly to the Pod, like LB -> Backend service -> Pod?
Update: Figured out the traffic flow. It's as follows:
LB -> Ingress controller -> Ingress -> Backend service -> pods
I have an HTTPS virtual host with a custom port, and I guess I need to edit the ingress-controller YAML file to allow the custom port and add the custom port to the Ingress. Would it then start routing?
Ingress.yml:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: test
  namespace: test
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: httpd
          servicePort: 443
cloud-generic-service.yml:
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
data:
  1234: "test-web-dev/httpd:1234"
  1235: "test-web-dev/tomcat7:1235"
spec:
  externalTrafficPolicy: Local
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: http
  - name: https
    port: 443
    protocol: TCP
    targetPort: https
  - name: port-1234
    port: 1234
    protocol: TCP
    targetPort: 1234
  - name: port-1235
    port: 1235
    protocol: TCP
    targetPort: 1235
An explanation for this can be found in the documentation:
Ingress exposes HTTP and HTTPS routes from outside the cluster to
services within the cluster. Traffic routing is controlled by rules
defined on the Ingress resource.
An Ingress may be configured to give Services externally-reachable
URLs, load balance traffic, terminate SSL / TLS, and offer name-based
virtual hosting.
So the Ingress routes traffic from outside the cluster to the service that you've specified in it, httpd in your example. You can specify how the traffic should be handled by adding annotations (see the sketch below for an example of an nginx ingress annotation).
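For instance, a minimal sketch of your Ingress with one commonly used nginx-ingress annotation added (ssl-redirect is just one of many available options):
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: test
  namespace: test
  annotations:
    # Redirect plain HTTP requests to HTTPS at the controller
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: httpd
          servicePort: 443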
The Ingress controller is an application that runs in a cluster and
configures an HTTP load balancer according to Ingress resources. The
load balancer can be a software load balancer running in the cluster
or a hardware or cloud load balancer running externally. Different
load balancers require different Ingress controller implementations.
In the case of NGINX, the Ingress controller is deployed in a pod along with the load balancer.
Ingress resources require an Ingress controller to be present in the cluster. It is not deployed into the cluster by default, which is why it has to be installed manually.
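As a side note on the custom ports in your Service excerpt: with ingress-nginx, the TCP port mappings normally live in a separate tcp-services ConfigMap rather than under data: in the Service itself (the Service only keeps the extra ports entries). A minimal sketch, assuming the controller was started with --tcp-services-configmap pointing at this ConfigMap:
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  1234: "test-web-dev/httpd:1234"
  1235: "test-web-dev/tomcat7:1235"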
This question is about my inability to connect a gRPC client to a gRPC service hosted in Kubernetes (AWS EKS), with an Istio ingress gateway.
On the kubernetes side: I have a container with a Go process listening on port 8081 for gRPC. The port is exposed at the container level. I define a kubernetes service and expose 8081. I define an istio gateway which selects istio: ingressgateway and opens port 8081 for gRPC. Finally I define an istio virtualservice with a route for anything on port 8081.
On the client side: I have a Go client which can send gRPC requests to the service.
It works fine when I kubectl port-forward -n mynamespace service/myservice 8081:8081 and call my client via client -url localhost:8081.
When I close the port forward and call with client -url [redacted]-[redacted].us-west-2.elb.amazonaws.com:8081, my client fails to connect. (That URL is the output of kubectl get svc istio-ingressgateway -n istio-system -o jsonpath='{.status.loadBalancer.ingress[0].hostname}' with :8081 appended.)
Logs:
I looked at the istio-system/istio-ingressgateway service logs. I do not see an attempted connection.
I do see the bookinfo connections I made earlier when going over the istio bookinfo tutorial. That tutorial worked, I was able to open a browser and see the bookinfo product page, and the ingressgateway logs show "GET /productpage HTTP/1.1" 200. So the Istio ingress-gateway works, it's just that I don't know how to configure it for a new gRPC endpoint.
Istio's Ingress-Gateway
kubectl describe service -n istio-system istio-ingressgateway
outputs the following, which I suspect is the problem: port 8081 is not listed despite my efforts to open it. I'm puzzled by how many ports are opened by default; I didn't open them (comments on how to close the ports I don't use would be welcome, but they aren't the reason for this SO question).
Name: istio-ingressgateway
Namespace: istio-system
Labels: [redacted]
Annotations: [redacted]
Selector: app=istio-ingressgateway,istio=ingressgateway
Type: LoadBalancer
IP: [redacted]
LoadBalancer Ingress: [redacted]
Port: status-port 15021/TCP
TargetPort: 15021/TCP
NodePort: status-port 31125/TCP
Endpoints: 192.168.101.136:15021
Port: http2 80/TCP
TargetPort: 8080/TCP
NodePort: http2 30717/TCP
Endpoints: 192.168.101.136:8080
Port: https 443/TCP
TargetPort: 8443/TCP
NodePort: https 31317/TCP
Endpoints: 192.168.101.136:8443
Port: tcp 31400/TCP
TargetPort: 31400/TCP
NodePort: tcp 31102/TCP
Endpoints: 192.168.101.136:31400
Port: tls 15443/TCP
TargetPort: 15443/TCP
NodePort: tls 30206/TCP
Endpoints: 192.168.101.136:15443
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
So I think I did not properly open port 8081 for gRPC. What other logs or tests can I run to help identify where this is coming from?
Here is the relevant yaml:
Kubernetes Istio virtualservice: whose intent is to route anything on port 8081 to myservice
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: myservice
  namespace: mynamespace
spec:
  hosts:
  - "*"
  gateways:
  - myservice
  http:
  - match:
    - port: 8081
    route:
    - destination:
        host: myservice
Kubernetes Istio gateway: whose intent is to open port 8081 for GRPC
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: myservice
  namespace: mynamespace
spec:
  selector:
    istio: ingressgateway
  servers:
  - name: myservice-plaintext
    port:
      number: 8081
      name: grpc-svc-plaintext
      protocol: GRPC
    hosts:
    - "*"
Kubernetes service: showing port 8081 is exposed at the service level, which I confirmed with the port-forward test mentioned earlier
apiVersion: v1
kind: Service
metadata:
  name: myservice
  namespace: mynamespace
  labels:
    app: myservice
spec:
  selector:
    app: myservice
  ports:
  - protocol: TCP
    port: 8081
    targetPort: 8081
    name: grpc-svc-plaintext
Kubernetes deployment: showing port 8081 is exposed at the container level, which I confirmed with the port-forward test mentioned earlier
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myservice
  namespace: mynamespace
  labels:
    app: myservice
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myservice
  template:
    metadata:
      labels:
        app: myservice
    spec:
      containers:
      - name: myservice
        image: [redacted]
        ports:
        - containerPort: 8081
Rechecking that DNS works on the client:
getent hosts [redacted]-[redacted].us-west-2.elb.amazonaws.com
outputs 3 IPs; I'm assuming that's good.
[IP_1 redacted] [redacted]-[redacted].us-west-2.elb.amazonaws.com
[IP_2 redacted] [redacted]-[redacted].us-west-2.elb.amazonaws.com
[IP_3 redacted] [redacted]-[redacted].us-west-2.elb.amazonaws.com
Checking Istio Ingressgateway's routes:
istioctl proxy-status istio-ingressgateway-[pod name]
istioctl proxy-config routes istio-ingressgateway-[pod name]
returns
Clusters Match
Listeners Match
Routes Match (RDS last loaded at Wed, 23 Sep 2020 13:59:41)
NOTE: This output only contains routes loaded via RDS.
NAME DOMAINS MATCH VIRTUAL SERVICE
http.8081 * /* myservice.mynamespace
* /healthz/ready*
* /stats/prometheus*
Port 8081 is routed to myservice.mynamespace, seems good to me.
UPDATE 1:
I am starting to understand I can't open port 8081 using the default istio ingress gateway. That service does not expose that port, and I was assuming creating a gateway would update the service "under the hood" but that's not the case.
The external ports that I can pick from are: 80, 443, 31400, 15443, 15021 and I think my gateway needs to rely only on those. I've updated my gateway and virtual service to use port 80 and the client then connects to the server just fine.
That means I have to differentiate between multiple services not by port (can't route from the same port to two services obviously), but by SNI, and I'm unclear how to do that in gRPC, I'm guessing I can add a Host:[hostname] in the gRPC header. Unfortunately, if that's how I can route, it means headers need to be read on the gateway, and that mandates terminating TLS at the gateway when I was hoping to terminate at the pod.
I am starting to understand I can't open port 8081 using the default istio ingress gateway. That service does not expose that port, and I was assuming creating a gateway would update the service "under the hood" but that's not the case. The external ports that I can pick from are: 80, 443, 31400, 15443, 15021 and I think my gateway needs to rely only on those. I've updated my gateway and virtual service to use port 80 and the client then connects to the server just fine.
I'm not sure how exactly you tried to add a custom port to the ingress gateway, but it's possible.
As far as I checked, it's possible to do in 3 ways; here are the options, with links to examples provided by #A_Suh, #Ryota and #peppered (a sketch of the Istio Operator approach follows the list):
Kubectl edit
Helm
Istio Operator
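As a hedged sketch of the Istio Operator option (assuming an istioctl-based installation; the port name grpc-custom and its targetPort are illustrative, not from your setup), the ingress gateway Service ports can be overridden like this and applied with istioctl install -f:
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  components:
    ingressGateways:
    - name: istio-ingressgateway
      enabled: true
      k8s:
        service:
          ports:
          # The override replaces the default port list, so keep the defaults you still need
          - name: status-port
            port: 15021
            targetPort: 15021
          - name: http2
            port: 80
            targetPort: 8080
          - name: https
            port: 443
            targetPort: 8443
          # Custom gRPC port from the question
          - name: grpc-custom
            port: 8081
            targetPort: 8081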
Additional resources:
How to create custom istio ingress gateway controller?
How to configure ingress gateway in istio?
That means I have to differentiate between multiple services not by port (can't route from the same port to two services obviously), but by SNI, and I'm unclear how to do that in gRPC, I'm guessing I can add a Host:[hostname] in the gRPC header. Unfortunately, if that's how I can route, it means headers need to be read on the gateway, and that mandates terminating TLS at the gateway when I was hoping to terminate at the pod.
I see you have already created a new question here, so let's just move there.
I have added the port to the ingress gateway successfully, but I still couldn't get the client connected to the server. For me too, port-forwarding works, but when I try to connect through the ingress I get the error below. Here the istio ingressgateway is on GKE, so it's using the global HTTPS load balancer.
Jun 14, 2021 8:28:08 PM com.manning.mss.ch12.grpc.sample01.InventoryClient updateInventory
WARNING: RPC failed: Status{code=INTERNAL, description=http2 exception, cause=io.grpc.netty.shaded.io.netty.handler.codec.http2.Http2Exception: First received frame was not SETTINGS. Hex dump for first 5 bytes: 485454502f
at io.grpc.netty.shaded.io.netty.handler.codec.http2.Http2Exception.connectionError(Http2Exception.java:85)
at io.grpc.netty.shaded.io.netty.handler.codec.http2.Http2ConnectionHandler$PrefaceDecoder.verifyFirstFrameIsSettings(Http2ConnectionHandler.java:350)
at io.grpc.netty.shaded.io.netty.handler.codec.http2.Http2ConnectionHandler$PrefaceDecoder.decode(Http2ConnectionHandler.java:251)
at io.grpc.netty.shaded.io.netty.handler.codec.http2.Http2ConnectionHandler.decode(Http2ConnectionHandler.java:450)
at io.grpc.netty.shaded.io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:502)
at io.grpc.netty.shaded.io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:441)
at io.grpc.netty.shaded.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:278)
at io.grpc.netty.shaded.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:359)
at io.grpc.netty.shaded.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:345)
at io.grpc.netty.shaded.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:337)
at io.grpc.netty.shaded.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1408)
at io.grpc.netty.shaded.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:359)
at io.grpc.netty.shaded.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:345)
at io.grpc.netty.shaded.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:930)
at io.grpc.netty.shaded.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:163)
at io.grpc.netty.shaded.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:677)
at io.grpc.netty.shaded.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:612)
at io.grpc.netty.shaded.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:529)
at io.grpc.netty.shaded.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:491)
at io.grpc.netty.shaded.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:905)
at io.grpc.netty.shaded.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.lang.Thread.run(Thread.java:748)
Here is the config for our current EKS service:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: main-api
  name: main-api-svc
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
spec:
  externalTrafficPolicy: Cluster
  ports:
  - name: http-port
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: main-api
  sessionAffinity: None
  type: LoadBalancer
Is there a way to configure it to use HTTPS instead of HTTP?
To terminate HTTPS traffic on Amazon Elastic Kubernetes Service and pass it to a backend:
1. Request a public ACM certificate for your custom domain.
2. Identify the ARN of the certificate that you want to use with the load balancer's HTTPS listener.
3. In your text editor, create a service.yaml manifest file based on the following example. Then, edit the annotations to provide the ACM ARN from step 2.
apiVersion: v1
kind: Service
metadata:
  name: echo-service
  annotations:
    # Note that the backend talks over HTTP.
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
    # TODO: Fill in with the ARN of your certificate.
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:{region}:{user id}:certificate/{id}
    # Only run SSL on the port named "https" below.
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"
spec:
  type: LoadBalancer
  selector:
    app: echo-pod
  ports:
  - name: http
    port: 80
    targetPort: 8080
  - name: https
    port: 443
    targetPort: 8080
4. To create a Service object, run the following command:
$ kubectl create -f service.yaml
5. To return the DNS URL of the service of type LoadBalancer, run the following command:
$ kubectl get service
Note: If you have many active services running in your cluster, be sure to get the URL of the right service of type LoadBalancer from the command output.
6. Open the Amazon EC2 console, and then choose Load Balancers.
7. Select your load balancer, and then choose Listeners.
8. For Listener ID, confirm that your load balancer port is set to 443.
9. For SSL Certificate, confirm that the SSL certificate that you defined in the YAML file is attached to your load balancer.
10. Associate your custom domain name with your load balancer name.
11. Finally, in a web browser, test your custom domain over HTTPS:
https://yourdomain.com
You should use an Ingress (and not a Service) to expose HTTP/S outside of the cluster.
I suggest using the ALB Ingress Controller.
There is a complete walkthrough here,
and you can see how to set up TLS/SSL here.
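For orientation, a minimal sketch of what an ALB-based Ingress with TLS termination could look like; the annotation set is typical for the ALB Ingress Controller / AWS Load Balancer Controller, and the certificate ARN, resource names and service name are placeholders or reused from the question for illustration only:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: main-api-ingress
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    # Terminate TLS at the ALB using an ACM certificate (placeholder ARN)
    alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:{region}:{account}:certificate/{id}
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS": 443}]'
    # Route to pod IPs registered in a target group
    alb.ingress.kubernetes.io/target-type: ip
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: main-api-svc
            port:
              number: 80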
I have configured the Istio ingress with a Let's Encrypt certificate.
I am able to access different services over HTTPS, running on different ports, by using Gateways and VirtualServices.
But kubernetes-dashboard runs on port 443 in the kube-system namespace with its own certificate. How can I expose it through Istio Gateways and VirtualServices?
I have defined a subdomain for the dashboard and created a Gateway and VirtualService directing 443 traffic to the kubernetes-dashboard service, but it's not working.
For the HTTPS VirtualService config I took reference from the Istio docs.
It sounds like you want to configure an ingress gateway to perform SNI passthrough instead of TLS termination. You can do this by setting the tls mode in your Gateway configuration to PASSTHROUGH, something like this:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: dashboard
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 443
      name: https-dashboard
      protocol: HTTPS
    tls:
      mode: PASSTHROUGH
    hosts:
    - dashboard.example.com
A complete passthrough example can be found here.
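For completeness, a hedged sketch of a matching VirtualService that routes the passed-through TLS traffic by SNI to the dashboard; the hostname is a placeholder and the destination assumes the dashboard Service is kubernetes-dashboard in kube-system listening on 443:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: dashboard
spec:
  hosts:
  - dashboard.example.com
  gateways:
  - dashboard
  tls:
  - match:
    - port: 443
      sniHosts:
      - dashboard.example.com
    route:
    - destination:
        # Assumed Service name and namespace for the dashboard
        host: kubernetes-dashboard.kube-system.svc.cluster.local
        port:
          number: 443
With PASSTHROUGH, the gateway only inspects the SNI and never decrypts the traffic, so the dashboard keeps serving its own certificate end to end.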