How to add another listening port to the nginx ingress controller in Kubernetes?

By default, the nginx ingress controller listens on two ports, 80 and 443. How do I add listening on port 9898?
I tried changing it in the DaemonSet, but nothing worked, and I don't even know where else to dig.

I'm not sure exactly what will work for you, but here are a few things you can try (read carefully, because nginx naming is confusing):
1. Define a Service for your Deployment, and make sure it covers the port routes you want and that the Deployment supports them:
apiVersion: v1
kind: Service
metadata:
  name: web-app
  namespace: web
  labels:
    app: web-app
spec:
  ports:
  - port: 80
    targetPort: 1337
    protocol: TCP
  selector:
    app: web-app
2. Refer to it in the nginx Ingress:
rules:
- host: mycoolwebapp.com
  http:
    paths:
    - path: /
      pathType: Prefix
      backend:
        service:
          name: web-app
          port:
            number: 80
The catch here is that you can route ALL services via port 80 but use any target port you want, so you can, say, add 50 ingress hosts/routes in a morning, all routing to port 80, and the only difference between them will be the target port in each Service.
3. If you are specifically unhappy with ports 80 and 443, you are welcome to edit the ingress-nginx-controller Service (the Service, not the controller Deployment/DaemonSet, because, as I said, nginx naming is confusing); see the sketch after this list.
4. Alternatively, you can find an example of the ingress-nginx-controller Service on the web, customize it, apply it, and then connect your Ingress to it... but I advise against this, because if nginx doesn't like anything in your custom Service, it's easier to just reinstall the whole Helm release and try again.
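To make option 3 concrete for port 9898 from the question, here is a hedged sketch using ingress-nginx's TCP-services mechanism. It assumes the controller is started with --tcp-services-configmap pointing at the ConfigMap below, and web/web-app are just the placeholder names from step 1:

# ConfigMap read via --tcp-services-configmap;
# key = external port, value = "<namespace>/<service>:<service port>"
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  "9898": "web/web-app:80"

You then also have to add the same port to the existing ingress-nginx-controller Service (merge this entry into its ports list rather than applying it on its own):

- name: proxied-tcp-9898
  port: 9898
  targetPort: 9898
  protocol: TCP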

Related

Unable to open Istio ingress-gateway for gRPC

This question is about my inability to connect a gRPC client to a gRPC service hosted in Kubernetes (AWS EKS), with an Istio ingress gateway.
On the kubernetes side: I have a container with a Go process listening on port 8081 for gRPC. The port is exposed at the container level. I define a kubernetes service and expose 8081. I define an istio gateway which selects istio: ingressgateway and opens port 8081 for gRPC. Finally I define an istio virtualservice with a route for anything on port 8081.
On the client side: I have a Go client which can send gRPC requests to the service.
It works fine when I kubectl port-forward -n mynamespace service/myservice 8081:8081 and call my client via client -url localhost:8081.
When I close the port forward and call with client -url [redacted]-[redacted].us-west-2.elb.amazonaws.com:8081, my client fails to connect. (That URL is the output of kubectl get svc istio-ingressgateway -n istio-system -o jsonpath='{.status.loadBalancer.ingress[0].hostname}' with :8081 appended.)
Logs:
I looked at the istio-system/istio-ingressgateway service logs. I do not see an attempted connection.
I do see the bookinfo connections I made earlier when going over the istio bookinfo tutorial. That tutorial worked, I was able to open a browser and see the bookinfo product page, and the ingressgateway logs show "GET /productpage HTTP/1.1" 200. So the Istio ingress-gateway works, it's just that I don't know how to configure it for a new gRPC endpoint.
Istio's Ingress-Gateway
kubectl describe service -n istio-system istio-ingressgateway
outputs the following, which I suspect is the problem: port 8081 is not listed despite my efforts to open it. I'm puzzled by how many ports are opened by default; I didn't open them (comments on how to close the ports I don't use would be welcome, but they aren't the reason for this SO question).
Name:                     istio-ingressgateway
Namespace:                istio-system
Labels:                   [redacted]
Annotations:              [redacted]
Selector:                 app=istio-ingressgateway,istio=ingressgateway
Type:                     LoadBalancer
IP:                       [redacted]
LoadBalancer Ingress:     [redacted]
Port:                     status-port  15021/TCP
TargetPort:               15021/TCP
NodePort:                 status-port  31125/TCP
Endpoints:                192.168.101.136:15021
Port:                     http2  80/TCP
TargetPort:               8080/TCP
NodePort:                 http2  30717/TCP
Endpoints:                192.168.101.136:8080
Port:                     https  443/TCP
TargetPort:               8443/TCP
NodePort:                 https  31317/TCP
Endpoints:                192.168.101.136:8443
Port:                     tcp  31400/TCP
TargetPort:               31400/TCP
NodePort:                 tcp  31102/TCP
Endpoints:                192.168.101.136:31400
Port:                     tls  15443/TCP
TargetPort:               15443/TCP
NodePort:                 tls  30206/TCP
Endpoints:                192.168.101.136:15443
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
So I think I did not properly open port 8081 for gRPC. What other logs or tests can I run to help identify where this is coming from?
Here is the relevant yaml:
Kubernetes Istio virtualservice: whose intent is to route anything on port 8081 to myservice
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: myservice
  namespace: mynamespace
spec:
  hosts:
  - "*"
  gateways:
  - myservice
  http:
  - match:
    - port: 8081
    route:
    - destination:
        host: myservice
Kubernetes Istio gateway: whose intent is to open port 8081 for GRPC
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: myservice
  namespace: mynamespace
spec:
  selector:
    istio: ingressgateway
  servers:
  - name: myservice-plaintext
    port:
      number: 8081
      name: grpc-svc-plaintext
      protocol: GRPC
    hosts:
    - "*"
Kubernetes service: showing port 8081 is exposed at the service level, which I confirmed with the port-forward test mentioned earlier
apiVersion: v1
kind: Service
metadata:
  name: myservice
  namespace: mynamespace
  labels:
    app: myservice
spec:
  selector:
    app: myservice
  ports:
  - protocol: TCP
    port: 8081
    targetPort: 8081
    name: grpc-svc-plaintext
Kubernetes deployment: showing port 8081 is exposed at the container level, which I confirmed with the port-forward test mentioned earlier
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myservice
  namespace: mynamespace
  labels:
    app: myservice
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myservice
  template:
    metadata:
      labels:
        app: myservice
    spec:
      containers:
      - name: myservice
        image: [redacted]
        ports:
        - containerPort: 8081
Re-checking that DNS works on the client:
getent hosts [redacted]-[redacted].us-west-2.elb.amazonaws.com
outputs 3 IPs; I'm assuming that's good.
[IP_1 redacted] [redacted]-[redacted].us-west-2.elb.amazonaws.com
[IP_2 redacted] [redacted]-[redacted].us-west-2.elb.amazonaws.com
[IP_3 redacted] [redacted]-[redacted].us-west-2.elb.amazonaws.com
Checking Istio Ingressgateway's routes:
istioctl proxy-status istio-ingressgateway-[pod name]
istioctl proxy-config routes istio-ingressgateway-[pod name]
returns
Clusters Match
Listeners Match
Routes Match (RDS last loaded at Wed, 23 Sep 2020 13:59:41)
NOTE: This output only contains routes loaded via RDS.
NAME          DOMAINS     MATCH                  VIRTUAL SERVICE
http.8081     *           /*                     myservice.mynamespace
              *           /healthz/ready*
              *           /stats/prometheus*
Port 8081 is routed to myservice.mynamespace, seems good to me.
UPDATE 1:
I am starting to understand I can't open port 8081 using the default istio ingress gateway. That service does not expose that port, and I was assuming creating a gateway would update the service "under the hood" but that's not the case.
The external ports that I can pick from are: 80, 443, 31400, 15443, 15021 and I think my gateway needs to rely only on those. I've updated my gateway and virtual service to use port 80 and the client then connects to the server just fine.
That means I have to differentiate between multiple services not by port (can't route from the same port to two services obviously), but by SNI, and I'm unclear how to do that in gRPC, I'm guessing I can add a Host:[hostname] in the gRPC header. Unfortunately, if that's how I can route, it means headers need to be read on the gateway, and that mandates terminating TLS at the gateway when I was hoping to terminate at the pod.
I am starting to understand I can't open port 8081 using the default istio ingress gateway. That service does not expose that port, and I was assuming creating a gateway would update the service "under the hood" but that's not the case. The external ports that I can pick from are: 80, 443, 31400, 15443, 15021 and I think my gateway needs to rely only on those. I've updated my gateway and virtual service to use port 80 and the client then connects to the server just fine.
I'm not sure exactly how you tried to add a custom port to the ingress gateway, but it is possible.
As far as I checked here, it's possible to do this in 3 ways. Here are the options, with links to examples provided by #A_Suh, #Ryota and #peppered; a sketch of the Istio Operator option follows the links below.
Kubectl edit
Helm
Istio Operator
Additional resources:
How to create custom istio ingress gateway controller?
How to configure ingress gateway in istio?
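For illustration, here is a hedged sketch of the Istio Operator option, adding 8081 to the ports exposed by the default istio-ingressgateway Service. The port names and the overlay itself are assumptions; note that specifying k8s.service.ports replaces the default port list, so the defaults you still need are repeated, and the overlay would be applied with istioctl install -f <file>:

apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  components:
    ingressGateways:
    - name: istio-ingressgateway
      enabled: true
      k8s:
        service:
          ports:
          # keep the defaults you still need...
          - name: status-port
            port: 15021
            targetPort: 15021
          - name: http2
            port: 80
            targetPort: 8080
          - name: https
            port: 443
            targetPort: 8443
          # ...and add the gRPC port used by the Gateway above
          - name: grpc-svc-plaintext
            port: 8081
            targetPort: 8081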
That means I have to differentiate between multiple services not by port (can't route from the same port to two services obviously), but by SNI, and I'm unclear how to do that in gRPC, I'm guessing I can add a Host:[hostname] in the gRPC header. Unfortunately, if that's how I can route, it means headers need to be read on the gateway, and that mandates terminating TLS at the gateway when I was hoping to terminate at the pod.
I see you have already created a new question here, so let's just move there.
I have added the port to the ingress gateway successfully, but I still couldn't get the client connected to the server. For me too, port-forwarding works, but when I try to connect through the ingress I get the error below. Here the istio ingressgateway is on GKE, so it's using the global HTTPS load balancer.
Jun 14, 2021 8:28:08 PM com.manning.mss.ch12.grpc.sample01.InventoryClient updateInventory
WARNING: RPC failed: Status{code=INTERNAL, description=http2 exception, cause=io.grpc.netty.shaded.io.netty.handler.codec.http2.Http2Exception: First received frame was not SETTINGS. Hex dump for first 5 bytes: 485454502f
at io.grpc.netty.shaded.io.netty.handler.codec.http2.Http2Exception.connectionError(Http2Exception.java:85)
at io.grpc.netty.shaded.io.netty.handler.codec.http2.Http2ConnectionHandler$PrefaceDecoder.verifyFirstFrameIsSettings(Http2ConnectionHandler.java:350)
at io.grpc.netty.shaded.io.netty.handler.codec.http2.Http2ConnectionHandler$PrefaceDecoder.decode(Http2ConnectionHandler.java:251)
at io.grpc.netty.shaded.io.netty.handler.codec.http2.Http2ConnectionHandler.decode(Http2ConnectionHandler.java:450)
at io.grpc.netty.shaded.io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:502)
at io.grpc.netty.shaded.io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:441)
at io.grpc.netty.shaded.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:278)
at io.grpc.netty.shaded.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:359)
at io.grpc.netty.shaded.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:345)
at io.grpc.netty.shaded.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:337)
at io.grpc.netty.shaded.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1408)
at io.grpc.netty.shaded.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:359)
at io.grpc.netty.shaded.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:345)
at io.grpc.netty.shaded.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:930)
at io.grpc.netty.shaded.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:163)
at io.grpc.netty.shaded.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:677)
at io.grpc.netty.shaded.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:612)
at io.grpc.netty.shaded.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:529)
at io.grpc.netty.shaded.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:491)
at io.grpc.netty.shaded.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:905)
at io.grpc.netty.shaded.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.lang.Thread.run(Thread.java:748)
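Side note on that exception: the hex dump 485454502f is ASCII for "HTTP/", i.e. the gRPC client received a plain HTTP/1.x response instead of the expected HTTP/2 SETTINGS frame, which usually means the load balancer in front is not speaking HTTP/2 to the backend. A quick way to confirm the decoding, assuming a shell with xxd available:

echo 485454502f | xxd -r -p
# prints: HTTP/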

Problem configuring websphere application server behind ingress

I am running a WebSphere Application Server deployment and service (type LoadBalancer). The WebSphere admin console works fine at the URL https://svcloadbalancerip:9043/ibm/console/logon.jsp
NAME      TYPE           CLUSTER-IP   EXTERNAL-IP   PORT(S)                                                                                                                    AGE
was-svc   LoadBalancer   x.x.x.x      x.x.x.x       9080:30810/TCP,9443:30095/TCP,9043:31902/TCP,7777:32123/TCP,31199:30225/TCP,8880:31027/TCP,9100:30936/TCP,9403:32371/TCP   2d5h
But if I configure that WebSphere service behind ingress using an Ingress manifest like:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: nginx-ingress-check
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - http:
      paths:
      - path: /ibm/console/logon.jsp
        backend:
          serviceName: was-svc
          servicePort: 9043
      - path: /v1
        backend:
          serviceName: web
          servicePort: 8080
The URL https://ingressip//ibm/console/logon.jsp doesn't work.
I have tried the rewrite annotation too.
Can anyone help me deploy the ibmcom/websphere-traditional Docker image in Kubernetes using a Deployment and a Service, with the Service mapped behind the ingress, so that the WebSphere admin console can somehow be opened through the ingress?
There is a Helm chart available from the IBM team which includes the Ingress resource as well. In your code snippet, you are also missing the SSL-related annotations.
https://hub.helm.sh/charts/ibm-charts/ibm-websphere-traditional
https://github.com/IBM/charts/tree/master/stable/ibm-websphere-traditional
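A hedged sketch of the Helm route (Helm 3 syntax; the repo URL should be verified against the linked chart page, and the release name my-was is just a placeholder; you would still supply values for the ingress/SSL settings):

helm repo add ibm-charts https://raw.githubusercontent.com/IBM/charts/master/repo/stable/
helm repo update
helm install my-was ibm-charts/ibm-websphere-traditional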
I have added the Virtual Host configuration needed for the admin console to work with port 443 in the following code sample.
Please note: exposing the admin console through the ingress is not good practice. Configuration should be done via wsadmin or by extending the base Dockerfile; any changes made through the console will be lost when the container restarts.
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  name: websphere
spec:
  type: NodePort
  ports:
  - name: admin
    port: 9043
    protocol: TCP
    targetPort: 9043
    nodePort: 30510
  - name: app
    port: 9443
    protocol: TCP
    targetPort: 9443
    nodePort: 30511
  selector:
    run: websphere
status:
  loadBalancer: {}
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: websphere-admin-vh
  namespace: default
data:
  ingress_vh.props: |+
    #
    # Header
    #
    ResourceType=VirtualHost
    ImplementingResourceType=VirtualHost
    ResourceId=Cell=!{cellName}:VirtualHost=admin_host
    AttributeInfo=aliases(port,hostname)
    #
    #
    #Properties
    #
    443=*
    EnvironmentVariablesSection
    #
    #
    #Environment Variables
    cellName=DefaultCell01
---
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: websphere
  name: websphere
spec:
  containers:
  - image: ibmcom/websphere-traditional
    name: websphere
    volumeMounts:
    - name: admin-vh
      mountPath: /etc/websphere/
    ports:
    - name: app
      containerPort: 9443
    - name: admin
      containerPort: 9043
  volumes:
  - name: admin-vh
    configMap:
      name: websphere-admin-vh
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: nginx-ingress-check
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/secure-backends: "true"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
spec:
  rules:
  - http:
      paths:
      - path: /ibm/console
        backend:
          serviceName: websphere
          servicePort: 9043
Exposing both the adminhost and defaulthost via ingress isn't possible, or at least I've never figured out how to accomplish it. The crux of the issue is that ingress listens on port 80 or port 443 and forwards your request to the corresponding port on the container, so the Host header of your request contains that port. I don't know enough about WAS channels/virtualhosts to understand exactly how this works, but in order to access WAS endpoints over any port other than the one listed for the endpoint in the WAS config, the websphere-traditional image has to set a property (com.ibm.ws.webcontainer.extractHostHeaderPort) that makes WAS take the port it uses, for things like checking virtual host alias entries and issuing redirects, from the Host header.
The problem then becomes that the port it extracts needs to be listed as a host alias for the virtual host in order for the traffic to be let through to the application. Since a combination of wildcard host and specific port can only be a host alias on one virtual host at a time, these were set up as host aliases on defaulthost so that web applications work via ingress, but this makes it impossible to also access the admin console, because it is served via a separate virtual host which doesn't (and as far as I know can't) have the host alias entries needed to let traffic with port 443 in its Host header through. I haven't had to figure out how to get this working because kubectl port-forward has been sufficient to get at the admin console for the times I've needed to consult something, and you can't make changes anyway because they'll disappear when the pod restarts and a new one is started from the same (unchanged) image.
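For completeness, the port-forward workaround mentioned above looks something like this, assuming the example Pod named websphere from the earlier answer; the console is then reachable at https://localhost:9043/ibm/console/logon.jsp:

kubectl port-forward pod/websphere 9043:9043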

Kubernetes nginx ingress session affinity across ports

I have a legacy application we've started running in Kubernetes. The application listens on two different ports, one for the general web page and another for a web service. In the long run we may try to change some of this but for the moment we're trying to get the legacy application to run as is. The current configuration has a single service for both ports:
apiVersion: v1
kind: Service
metadata:
  name: app
spec:
  selector:
    app: my-app
  ports:
  - name: web
    port: 8080
    protocol: TCP
    targetPort: 8080
  - name: service
    port: 8081
    protocol: TCP
    targetPort: 8081
Then I'm using a single ingress to route traffic to the correct service port based on path:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: app
  annotations:
    nginx.ingress.kubernetes.io/upstream-hash-by: "$remote_addr"
spec:
  rules:
  - host: myapp.test.com
    http:
      paths:
      - backend:
          serviceName: app
          servicePort: 8080
        path: /app
      - backend:
          serviceName: app
          servicePort: 8081
        path: /service
This works great for routing: requests coming into the ingress get routed to the correct service port based on path. However, the problem I have is that for this legacy app to work, requests to both ports 8080 and 8081 need to be routed to the same pod for each client.
You can see I tried adding the upstream-hash-by annotation. This seemed to ensure that all requests to 8080 from one client went to the same pod, and all requests to 8081 from one client went to the same pod, but not that those were the same pod for any one client. When I run with a single pod instance everything is great, but when I start spinning up additional pods, some clients get /app requests routed to one pod and /service requests routed to another, and in this application that does not currently work. I have tried other annotations on the ingress, including nginx.ingress.kubernetes.io/affinity: "cookie" and nginx.ingress.kubernetes.io/affinity-mode: "persistent", as well as adding sessionAffinity: ClientIP to the service, but so far nothing seems to work.
The goal is that all requests to either path get routed to the same pod for any one client. Any help would be greatly appreciated.
Session persistence settings will only work if you configure kube-proxy so that it forwards requests to the local pod only, and not to random pods across the cluster.
You can do this by setting the following at the Service level:
service.spec.externalTrafficPolicy: Local
you can read more here:
https://kubernetes.io/docs/tutorials/services/source-ip/
After doing this, your ingress annotations should work. I have tested this with an external load balancer only, not with ingress, though.
Keeping everything else the same, this service definition should work:
apiVersion: v1
kind: Service
metadata:
  name: app
spec:
  externalTrafficPolicy: Local
  selector:
    app: my-app
  ports:
  - name: web
    port: 8080
    protocol: TCP
    targetPort: 8080
  - name: service
    port: 8081
    protocol: TCP
    targetPort: 8081
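If the Service already exists, the same change can be applied in place; a minimal one-liner sketch, assuming the Service is named app as above:

kubectl patch svc app -p '{"spec":{"externalTrafficPolicy":"Local"}}'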

Ingress cannot resolve NodePort IP in GKE

I have an ingress defined as:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: foo-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: zaz-address
    kubernetes.io/ingress.allow-http: "false"
    ingress.gcp.kubernetes.io/pre-shared-cert: foo-bar-com
spec:
  rules:
  - host: foo.bar.com
    http:
      paths:
      - path: /zaz/*
        backend:
          serviceName: zaz-service
          servicePort: 8080
Then the service zaz-service is a NodePort defined as:
apiVersion: v1
kind: Service
metadata:
  name: zaz-service
  namespace: default
spec:
  clusterIP: 10.27.255.88
  externalTrafficPolicy: Cluster
  ports:
  - nodePort: 32455
    port: 8080
    protocol: TCP
    targetPort: 8080
  selector:
    app: zap
  sessionAffinity: None
  type: NodePort
The NodePort service is successfully selecting the two pods behind it that serve my service. I can see in the GKE services list that it has an IP that looks internal.
When I check the ingress in the same interface, it also looks fine, but it is serving zero pods.
When I describe the ingress, on the other hand, I can see:
Rules:
  Host         Path    Backends
  ----         ----    --------
  foo.bar.com
               /zaz/*  zaz-service:8080 (<none>)
This looks like the ingress is unable to resolve the service IP. What am I doing wrong here? I cannot access the service through the external domain name; I am getting a 404 error.
How can I make the ingress translate the domain name zaz-service into the proper IP so it can redirect traffic there?
It seems like wildcards in the path are not supported yet.
Is there any reason not to use just the following in your case?
spec:
  rules:
  - host: foo.bar.com
    http:
      paths:
      - path: /zaz
        backend:
          serviceName: zaz-service
          servicePort: 8080
My mistake was, as expected, not reading the documentation thoroughly.
The port stated in the Ingress path is not a "forwarding" mechanism but a "filtering" one. In my head it made sense that it would redirect http(s) traffic to port 8080, which is the port the Service behind it was listening on, and the Pod behind the Service too.
The reality was that it would not route traffic that was not on port 8080 to the service. To make it cleaner, I changed the port in the Ingress from 8080 to 80, and the front-facing port in the Service from 8080 to 80 too.
Now all requests coming from the internet can reach the server successfully.
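For reference, here is a hedged sketch of what that fix looks like applied to the manifests from the question (the pods keep listening on 8080; only the front-facing Service port and the Ingress servicePort change to 80):

apiVersion: v1
kind: Service
metadata:
  name: zaz-service
  namespace: default
spec:
  type: NodePort
  selector:
    app: zap
  ports:
  - port: 80          # front-facing port referenced by the Ingress
    targetPort: 8080  # the pods still listen on 8080
    protocol: TCP
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: foo-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: zaz-address
    kubernetes.io/ingress.allow-http: "false"
    ingress.gcp.kubernetes.io/pre-shared-cert: foo-bar-com
spec:
  rules:
  - host: foo.bar.com
    http:
      paths:
      - path: /zaz/*
        backend:
          serviceName: zaz-service
          servicePort: 80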

Loadbalancing with reserved IPs in Google Container Engine

I want to host a website (simple nginx+php-fpm) on Google Container Engine. I built a replication controller that controls the nginx and php-fpm pod. I also built a service that can expose the site.
How do I link my service to a public (and reserved) IP Address so that the webserver sees the client IP addresses?
I tried creating an ingress. It provides the client IP through an extra HTTP header. Unfortunately, ingress does not support reserved IPs yet:
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
  - host: example.org
    http:
      paths:
      - backend:
          serviceName: example-web
          servicePort: 80
        path: /
I also tried creating a service with a reserved IP. This gives me a public IP address but I think the client IP is lost:
apiVersion: v1
kind: Service
metadata:
  name: 'example-web'
spec:
  selector:
    app: example-web
  ports:
  - port: 80
    targetPort: 80
  loadBalancerIP: "10.10.10.10"
  type: LoadBalancer
I would set up the HTTP load balancer manually, but I didn't find a way to configure a cluster IP as a backend for the load balancer.
This seems like a very basic use case to me and stands in the way of using container engine in production. What am I missing? Where am I wrong?
As you are running on Google Container Engine, you could set up a Compute Engine HTTP Load Balancer for your static IP. The target proxy will add X-Forwarded-* headers (such as X-Forwarded-For with the client IP) for you.
Set up your Kubernetes service with type NodePort and add a nodePort field. This way the nodePort is accessible via kube-proxy on every node's IP address, regardless of where the pod is running:
apiVersion: v1
kind: Service
metadata:
  name: 'example-web'
spec:
  selector:
    app: example-web
  ports:
  - nodePort: 30080
    port: 80
    targetPort: 80
  type: NodePort
Create a backend service with an HTTP health check on port 30080 for your instance group (the cluster nodes).
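Roughly, the Compute Engine side could then be wired up like this (a hedged sketch; resource names are placeholders, and you would still need a URL map, target HTTP proxy, and a global forwarding rule bound to your reserved IP on top of it):

# Health check against the NodePort opened by kube-proxy on every node
gcloud compute health-checks create http example-web-hc --port 30080

# Backend service pointing at the cluster's node instance group
gcloud compute backend-services create example-web-backend \
    --protocol HTTP --health-checks example-web-hc --global
gcloud compute backend-services add-backend example-web-backend \
    --instance-group <your-node-instance-group> \
    --instance-group-zone <zone> --global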