Kubernetes networking confusion on Google Cloud

I'm fairly new to Kubernetes and have played around with it for a few days now to get a feel for it. Trying to set up an NGINX Ingress controller on the Google Cloud platform following this guide, I was able to set everything up as written there, no problems, and I got to see the hello-app output.
However, when I tried replicating this in a slightly different way, I encountered a weird behavior that I am not able to resolve. Instead of using the image --image=gcr.io/google-samples/hello-app:1.0 (as done in the tutorial), I wanted to deploy a standard nginx container with a custom index page to see if I had understood things correctly. As far as I can tell, all the steps should be the same except for the exposed port: while the hello-app exposes port 8080, the standard port for the nginx container is 80. So, naively, I thought exposing it (i.e., creating a service) with this altered command should do the trick:
kubectl expose deployment hello-app --port=8080 --target-port=80
where instead of having target-port=8080 as for the hello-app, I put target-port=80. As far as I can tell, everything else should stay the same, right? In any case, this does not work, and when I try to access the page I get a "404 - Not Found", although the container is definitely running and serving the index page (I checked using port forwarding from the Google Cloud console, which apparently makes the page directly accessible for dev purposes). In fact, I also tried several other combinations of ports (although I believe the one above should be the correct one), to no avail. Can anyone explain to me why the routing does not work here?
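For context, the expose command above generates a Service roughly like the following sketch; the selector label is an assumption based on the default labels that kubectl create deployment applies:

# Roughly what `kubectl expose deployment hello-app --port=8080 --target-port=80` creates.
apiVersion: v1
kind: Service
metadata:
  name: hello-app
spec:
  selector:
    app: hello-app   # assumption: default label set by `kubectl create deployment`
  ports:
  - port: 8080       # port the Service listens on (what the Ingress points at)
    targetPort: 80   # port the nginx container actually serves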

If you look at the tutorial's ingress configuration, notice the path: "/hello":
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-resource
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
  rules:
  - host: "34.122.88.204.nip.io"
    http:
      paths:
      - pathType: Prefix
        path: "/hello"
        backend:
          service:
            name: hello-app
            port:
              number: 8080
You might have updated the port number and service name in the config, but the path is still /hello. That means your request goes to the nginx container, which is not able to find any page at /hello, so it gives you a 404.
You hit the endpoint IP/hello (goes to the NGINX ingress controller) -->
the path /hello is matched and the request is forwarded to the service -->
hello-app (the service forwards the request to the Pods) --> nginx Pod (it
doesn't have anything at path /hello, so 404)
The 404 is written by an nginx process; in your case it comes either from the NGINX ingress controller or from the container (Pod) itself.
So try your ingress config with path: "/" instead and hit the endpoint; you should see the output from nginx. A sketch of that change follows below.
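For illustration, here is the tutorial's Ingress with only the path changed to "/", a minimal sketch reusing the names from above (an alternative would be to keep /hello and add a rewrite-target annotation):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-resource
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
  rules:
  - host: "34.122.88.204.nip.io"
    http:
      paths:
      - pathType: Prefix
        path: "/"           # nginx serves its index page at /, so this now matches
        backend:
          service:
            name: hello-app
            port:
              number: 8080  # the Service port set by `kubectl expose` above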

Related

Kubernetes Ingress not forwarding routes

I am fairly new to Kubernetes and have just deployed my first cluster to IBM Cloud. When I created the cluster, I got a dedicated ingress subdomain, which I will refer to as <long-k8subdomain>.cloud for the scope of this post. Now, this subdomain works for my app. For example, <long-k8subdomain>.cloud/ping works from my browser/curl just fine; I get the expected JSON response back. But if I add this subdomain to a CNAME record in my domain provider's DNS settings (I have used Bluehost and IBM Cloud Internet Services), I get a 404 response back from all routes. However, this response is the default nginx 404 response (it says "nginx" under "404 Not Found"), which I believe means the ingress load balancer is being reached but the request does not get routed correctly. I am using Kubernetes version 1.20.12_1561 on VPC gen 2, and this is my ingress-config.yaml file:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress-resource
  annotations:
    kubernetes.io/ingress.class: "public-iks-k8s-nginx"
    nginx.ingress.kubernetes.io/configuration-snippet: |
      more_set_headers "Host: <long-k8subdomain>.cloud";
spec:
  rules:
  - host: <long-k8subdomain>.cloud
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-service-name
            port:
              number: 80
I am pretty sure this problem is due to the annotations. Maybe I am using the wrong ones, or I do not have enough of them. Ideally, I would like something like this: api..com/ to route correctly. I have also read a little bit about default backends, but I have not dived too much into that just yet. Any help would be greatly appreciated, as I have spent multiple hours trying to fix this.
Some sources I have used:
https://cloud.ibm.com/docs/containers?topic=containers-cs_network_planning
https://cloud.ibm.com/docs/containers?topic=containers-ingress-types
https://cloud.ibm.com/docs/containers?topic=containers-comm-ingress-annotations#annotations
Note: The reason I have the second annotation is that, for some reason, requests without that header were not being routed correctly. That was part of my debugging process; I am not sure whether that annotation actually solves anything, so I left it in for now.
For the NGINX ingress controller to route requests for your own domain's CNAME record to the service instead of the IBM Cloud one, you need a rule in the ingress where the host identifies your domain.
For instance, if your domain's DNS entry is api.example.com, then change the resource YAML to:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress-resource
  annotations:
    kubernetes.io/ingress.class: "public-iks-k8s-nginx"
spec:
  rules:
  - host: api.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-service-name
            port:
              number: 80
You should not need the second annotation for this to work.
If you want both of the hosts to work, you could add a second rule instead of replacing the host in the existing one, along the lines of the sketch below.
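For instance, the spec of the same Ingress with both hosts (a sketch, keeping api.example.com as the stand-in for your own domain):

spec:
  rules:
  - host: api.example.com            # your own domain's CNAME
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-service-name
            port:
              number: 80
  - host: <long-k8subdomain>.cloud   # the IBM-provided ingress subdomain
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-service-name
            port:
              number: 80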

GKE Ingress and GCE Load Balancer: Always get 404

I am trying to deploy 2 applications (behind 2 separate Deployment objects). I have 1 Service per Deployment, of type NodePort.
application1_service.yaml
apiVersion: v1
kind: Service
metadata:
  name: application1-service
  namespace: default
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    run: application1
  type: NodePort
application2_service.yaml is exactly the same (except for the name and run values).
I use an Ingress to make the 2 services available,
ingress.yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: "my-static-ip"
    networking.gke.io/managed-certificates: "my-certificate"
    kubernetes.io/ingress.class: "gce"
spec:
  rules:
  - host: "my.host.com"
    http:
      paths:
      - path: /*
        backend:
          serviceName: application1-service
          servicePort: 80
      - path: /application2/*
        backend:
          serviceName: application2-service
          servicePort: 80
I also create a ManagedCertificate object, to be able to handle HTTPS requests.
managed_certificate.yaml
apiVersion: networking.gke.io/v1beta1
kind: ManagedCertificate
metadata:
  name: my-certificate
spec:
  domains:
  - my.host.com
The weird thing here is that curl https://my.host.com/ works fine and I can access my service, but when I try curl https://my.host.com/application2/, I keep getting 404 Not Found.
Why does the root path work and not the other?
Additional info:
The ManagedCertificate is valid and works fine with /.
application1 and application2 are the exact same app and if I swap them in the ingress, the output is the same.
Thanks for your help!
EDIT:
Here is the 404 I get when I try to access application2
I don't know if it helps, but here is also the part of the ingress access logs showing the 404.
I think you can't use the same port for 2 different applications, because this port is used on every node to route to one app.
From docs:
NodePort: Exposes the Service on each Node's IP at a static port (the NodePort). A ClusterIP Service, to which the NodePort Service routes, is automatically created. You'll be able to contact the NodePort Service, from outside the cluster, by requesting <NodeIP>:<NodePort>.
So in your case one app is already using port 80; you could try to use a different one for application 2, as sketched below.
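A sketch of that suggestion (the port 8080 is an arbitrary choice; targetPort stays at 80, where the container listens):

apiVersion: v1
kind: Service
metadata:
  name: application2-service
  namespace: default
spec:
  ports:
  - port: 8080        # different Service port than application1-service
    protocol: TCP
    targetPort: 80    # the container itself still listens on 80
  selector:
    run: application2
  type: NodePort

The servicePort for application2-service in the Ingress would then need to be changed to 8080 as well.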
I have replicated the setup without a domain and ManagedCertificate. For me it's working fine using the load balancer IP address.
(ingress-316204)$ curl http://<LB IP address>/v2/
Hello, world!
Version: 2.0.0
Hostname: web2-XXX
(ingress-316204)$ curl http://<LB IP address>/
Hello, world!
Version: 1.0.0
Hostname: web-XXX
The ingress configuration seems to be correct as per the doc. Check whether curl https://<LB IP address>/application2/ works or not; if it works, there might be some issue with the host name.
Check if you have updated the hosts file (/etc/hosts) with the line <LB IP address> my.host.com.
Check if the host, path and backend are configured correctly in the load balancer configuration.
If you are still having the problem, check that the port, nodePort and targetPort are configured correctly (see the sketch below), or else share the output of kubectl describe ing my-ingress for further investigation.
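For reference, this is how the three port fields of a NodePort Service relate (a sketch; the numbers are illustrative):

apiVersion: v1
kind: Service
metadata:
  name: application1-service
spec:
  type: NodePort
  selector:
    run: application1
  ports:
  - port: 80          # port the Service exposes (what the Ingress backend targets)
    targetPort: 80    # containerPort the Pod actually listens on
    nodePort: 30080   # port opened on every node (auto-assigned if omitted)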
Answering my own question:
After searching for days, I finally found the cause of the problem.
Everything was fine with the cluster and the configs; my problem was with my Flask API.
All the URLs were like this one:
@app.route("/my_function")
So it was working fine on the root path with my.host.com/my_function, but when I typed my.host.com/application1/my_function it wasn't working...
I just changed my app to
@app.route("/application1/my_function")
Everything works fine now :) Hope it will help!

Traefik behind an SSL-terminating load balancer returns 404

I have a K8s setup with Traefik exposed like this:
kubernetes:
  ingressClass: traefik
service:
  nodePorts:
    http: 32080
serviceType: NodePort
Behind it, I forward some requests to different services:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-name
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
  - host: my-host.com
    http:
      paths:
      - path: /my-first-path
        backend:
          serviceName: my-nodeJs-services
          servicePort: 3000
When the DNS is set to resolve directly to my IP, the application works fine with HTTP:
http://my-host.com:32080/my-first-path
But when someone adds SSL through AWS ALB / API Gateway, the application fails to be reached, with a 404 Not Found error.
The route is like this
https://my-host.com/my-first-path
On the AWS side, they configured something like this:
https://my-host.com => SSL Termination and => Forward all to 43.43.43.43:32080
I think this fails because Traefik expects http://my-host.com but not https://my-host.com, which leads to its failure to find the matching route? Or maybe the hostname is lost at SSL termination time, so that Traefik cannot find a route?
What should I do in this situation?
I am not very familiar with ALB, but what is probably happening is that the requests received by the load balancer contain the header Host: my-host.com, and when they get forwarded to your ingress controller, the header is replaced by Host: 43.43.43.43. If this is the case, I see 3 solutions:
ALB might be able to pass the original Host header to the target. (You will have to check in the docs whether that is possible.)
If the application behind your ingress doesn't check the Host header, you can write an ingress that doesn't match a specific host: the host field is simply left out of the rule (see the sketch after this list).
If the name resolution works internally, you can define a name for your target, use this name in your ALB and in your ingress.
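A sketch of the second option, reusing the ingress above with the host field removed, so the rule matches requests for any hostname:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-name
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
  - http:                 # no `host:` field -> matches any Host header
      paths:
      - path: /my-first-path
        backend:
          serviceName: my-nodeJs-services
          servicePort: 3000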

kubernetes ingress configuration

I have a working Nexus 3 pod, reachable on port 30080 (with NodePort): http://nexus.mydomain:30080/ works perfectly from all hosts (from the cluster or outside).
Now I'm trying to make it accessible on port 80 (for obvious reasons).
Following the docs, I've implemented it like this (trivial):
[...]
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: nexus-ingress
  namespace: nexus-ns
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  rules:
  - host: nexus.mydomain
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          serviceName: nexus-service
          servicePort: 80
Applying it works without errors. But when I try to reach http://nexus.mydomain, I get:
Service Unavailable
No logs are shown (the webapp is not hit).
What did I miss?
K3s Lightweight Kubernetes
K3s is designed to be a single binary of less than 40MB that completely implements the Kubernetes API. In order to achieve this, they removed a lot of extra drivers that didn't need to be part of the core and are easily replaced with add-ons.
As I mentioned in the comments, K3s uses the Traefik Ingress Controller by default.
Traefik is an open-source Edge Router that makes publishing your services a fun and easy experience. It receives requests on behalf of your system and finds out which components are responsible for handling them.
This information can be found in K3s Rancher Documentation.
Traefik is deployed by default when starting the server... To prevent k3s from using or overwriting the modified version, deploy k3s with --no-deploy traefik and store the modified copy in the k3s/server/manifests directory. For more information, refer to the official Traefik for Helm Configuration Parameters.
To disable it, start each server with the --disable traefik option.
If you want to deploy Nginx Ingress controller, you can check guide How to use NGINX ingress controller in K3s.
As you are using an NGINX-specific annotation like nginx.ingress.kubernetes.io/rewrite-target: /$1, you have to use the NGINX Ingress controller.
If you use more than one Ingress controller, you will need to force the use of the NGINX ingress via an annotation:
annotations:
  kubernetes.io/ingress.class: "nginx"
If the information above doesn't help, please provide more details, like your Deployment and Service.
I do not think you can expose it on port 80 or 443 over a NodePort service or at least it is not recommended.
In this configuration, the NGINX container remains isolated from the host network. As a result, it can safely bind to any port, including the standard HTTP ports 80 and 443. However, due to the container namespace isolation, a client located outside the cluster network (e.g. on the public internet) is not able to access Ingress hosts directly on ports 80 and 443. Instead, the external client must append the NodePort allocated to the ingress-nginx Service to HTTP requests.
-- Bare-metal considerations - NGINX Ingress Controller
While it may sound tempting to reconfigure the NodePort range using the --service-node-port-range API server flag to include unprivileged ports and be able to expose ports 80 and 443, doing so may result in unexpected issues including (but not limited to) the use of ports otherwise reserved to system daemons and the necessity to grant kube-proxy privileges it may otherwise not require.
This practice is therefore discouraged. See the other approaches proposed in this page for alternatives.
-- Bare-metal considerations - NGINX Ingress Controller
I did a similar setup a couple of months ago. I installed a MetalLB load balancer and then exposed the service. Depending on your provider (e.g., GKE), a load balancer can even be spun up automatically, so you possibly don't have to deal with MetalLB at all; that said, MetalLB is not hard to set up and works great.
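For reference, a minimal MetalLB layer-2 configuration sketch, assuming MetalLB v0.13+ (which is configured through CRDs) and an address range that is actually free on your network:

apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: default-pool
  namespace: metallb-system
spec:
  addresses:
  - 192.168.1.240-192.168.1.250   # assumption: an unused range on your LAN
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: default-l2
  namespace: metallb-system
spec:
  ipAddressPools:
  - default-pool

With this in place, a Service of type LoadBalancer (e.g. the one in front of your ingress controller) gets an external IP from the pool, and ports 80/443 work without any NodePort tricks.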

Define a fallback service for an isomorphic JavaScript app

I have an isomorphic JavaScript app that uses Vue's SSR plugin running on K8s. This app can either be rendered server-side by my Express server with Node, or it can be served straight to the client by Nginx and rendered in the browser. It works pretty flawlessly either way.
Running it in Express with SSR uses far more resources, however, and Express is more complicated and prone to fail if I misconfigure something. Serving it with Nginx to be rendered client-side, on the other hand, is dead simple and barely uses any resources in my cluster.
What I want to do is have a few replicas of a pod running my Express server that's performing the SSR, but if for some reason these pods go down, I want a fallback service on the ingress that will serve from a backup pod with just Nginx serving the client-renderable code.
Setting up the pods is easy enough, but how can I tell an ingress to serve from a different service than normal if the normal service is unreachable and/or responding too slowly to requests?
The easiest way to set up NGINX Ingress to meet your needs is by using the default-backend annotation.
This annotation is of the form nginx.ingress.kubernetes.io/default-backend: <svc name> to specify a custom default backend. This <svc name> is a reference to a service inside of the same namespace in which you are applying this annotation. This annotation overrides the global default backend. This service will handle the response when the service in the Ingress rule does not have active endpoints. It will also handle the error responses if both this annotation and the custom-http-errors annotation are set.
Example:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-app-ingress
  namespace: default
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: "/"
    nginx.ingress.kubernetes.io/custom-http-errors: '404'
    nginx.ingress.kubernetes.io/default-backend: default-http-backend
spec:
  rules:
  - host: myapp.mydomain.com
    http:
      paths:
      - path: "/"
        backend:
          serviceName: custom-http-backend
          servicePort: 80
In this example, NGINX serves custom-http-backend as the primary backend, and if that service fails, it redirects the end user to default-http-backend. A sketch of the fallback pair itself follows below.
You can find more details on this example here.
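To complete the picture, the fallback backend in this pattern is just an ordinary Deployment/Service pair; a sketch, assuming an nginx image that serves your prebuilt client-renderable bundle (the names match the example above):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: default-http-backend
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: default-http-backend
  template:
    metadata:
      labels:
        app: default-http-backend
    spec:
      containers:
      - name: nginx
        image: nginx:stable      # assumption: image bundles the client-side build
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: default-http-backend
  namespace: default
spec:
  selector:
    app: default-http-backend
  ports:
  - port: 80
    targetPort: 80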