Correct way to expose an Ingress service on a bare-metal Kubernetes cluster

I have the following topology in my Kubernetes cluster: 2 nodes, 1 master and 1 worker.
I created an application with my deployment.yml and my service.yml, using the NodePort configuration, see:
apiVersion: v1
kind: Service
metadata:
  name: administrativo-service
spec:
  type: NodePort
  selector:
    app: administrativo
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
Now I need to access this API using a DNS name, something like myapi.localdns, so I followed these steps to install an NGINX-based ingress controller:
https://kubernetes.github.io/ingress-nginx/deploy/#quick-start
https://kubernetes.github.io/ingress-nginx/deploy/#bare-metal-clusters
After an hour the pods in the ingress-nginx namespace were up and running. And finally, this is my Ingress YAML:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: administrativo-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
    - host: myapi.localdns
      http:
        paths:
          - pathType: Prefix
            path: /
            backend:
              service:
                name: administrativo-service
                port:
                  number: 80
My idea is to create an entry in my company's DNS pointing to myapi.localdns, but to do that I need the Ingress address, which doesn't show up in my Ingress resource, see:
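Behind the missing screenshot, the kubectl output would look roughly like this (reconstructed; NAME and HOSTS come from the manifests above, the AGE value is an assumption, and the empty ADDRESS column is the point):

kubectl get ingress
NAME                     CLASS    HOSTS            ADDRESS   PORTS   AGE
administrativo-ingress   <none>   myapi.localdns             80      60m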

I solved the problem with these steps:
1. First, create the CNAMEs in my company's DNS pointing to my Kubernetes worker node IP.
2. Reinstall the ingress-nginx controller using the bare-metal configuration: kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.0.5/deploy/static/provider/baremetal/deploy.yaml
3. Change the deploy.yaml to use NodePort before running kubectl apply.
4. Use externalIPs to expose the service on port 80 (see the sketch below).
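A minimal sketch of step 4, assuming the stock ingress-nginx controller Service from the bare-metal manifest and a worker-node IP of 10.0.0.2 (the names and the IP are assumptions; adjust to your cluster):

apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  type: NodePort
  externalIPs:
    - 10.0.0.2   # assumed worker-node IP; traffic to this IP on port 80 reaches the controller
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/component: controller
  ports:
    - name: http
      port: 80
      targetPort: http
    - name: https
      port: 443
      targetPort: https

With externalIPs set, the controller answers on port 80 of the node IP itself, so the company DNS CNAMEs can point straight at the worker node without a high NodePort number.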

Related

Cannot access to Kubernetes Ingress (Istio) on GKE

I set up Istio (Kubernetes Ingress mode, NOT Istio Gateway) on GKE. However, I cannot access it from outside using curl:
kubectl get svc -n istio-system | grep ingressgateway
istio-ingressgateway   LoadBalancer   10.48.11.240   35.222.111.100   15020:30115/TCP,80:31420/TCP,443:32019/TCP,31400:31267/TCP,15029:30180/TCP,15030:31055/TCP,15031:32226/TCP,15032:30437/TCP,15443:31792/TCP   41h

curl 35.222.111.100
curl: (7) Failed to connect to 35.222.111.100 port 80: Connection refused
This is the config of Ingress:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: istio
  name: ingress
spec:
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: in-keycloak
                port:
                  number: 8080
This is the config of the Service:
apiVersion: v1
kind: Service
metadata:
  name: in-keycloak
  labels:
    app: keycloak
spec:
  ports:
    - name: http
      port: 8080
      targetPort: 8080
  selector:
    app: keycloak
  type: ClusterIP
If I use the same config with Docker Desktop on my local machine (macOS), it works fine.
There are two things that must be in place on GKE to make this work with Istio on a private cluster.
1. To make Istio work on GKE, you should follow these instructions to prepare a GKE cluster for Istio. That preparation includes opening port 15017 so Istio can work.
For private GKE clusters
An automatically created firewall rule does not open port 15017. This is needed by the Pilot discovery validation webhook.
To review this firewall rule for master access:
$ gcloud compute firewall-rules list --filter="name~gke-${CLUSTER_NAME}-[0-9a-z]*-master"
To replace the existing rule and allow master access:
$ gcloud compute firewall-rules update <firewall-rule-name> --allow tcp:10250,tcp:443,tcp:15017
2. Comparing with the Istio documentation, I would say your Ingress is not properly configured. Below is an Ingress resource from the documentation that you might try instead:
apiVersion: networking.k8s.io/v1beta1
kind: IngressClass
metadata:
  name: istio
spec:
  controller: istio.io/ingress-controller
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress
spec:
  ingressClassName: istio
  rules:
    - host: httpbin.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              serviceName: httpbin
              servicePort: 8000

K8s service LB to external services w/ nginx-ingress controller

Is it possible to configure the k8s nginx-ingress controller as a load balancer so that a Kubernetes Service actively connects to an external backend hosted on external hosts/ports (where one backend will be enabled at a time, connecting back to the cluster service)? Similar to an Envoy proxy? This is on vanilla Kubernetes, on-prem.
So rather than balancing load from
client -> cluster -> service,
I am looking for
service -> nginx-ingress -> external-backend.
Define a Kubernetes Service with no selector, then define an Endpoints object with the same name as the Service, putting the external IP and port in it. Normally you do not define Endpoints for Services, but because this Service has no selector, you need to provide the Endpoints yourself.
Then you point the Ingress at the Service.
Here's an example that exposes an Ingress on the cluster and sends the traffic to 192.168.88.1 on TCP 8081.
apiVersion: v1
kind: Service
metadata:
  name: router
  namespace: default
spec:
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8081
---
apiVersion: v1
kind: Endpoints
metadata:
  name: router
  namespace: default
subsets:
  - addresses:
      - ip: 192.168.88.1
      - ip: 192.168.88.2 # As per question below
    ports:
      - port: 8081
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: router
  namespace: default
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
    - host: my-router.domain.com
      http:
        paths:
          - path: /
            backend:
              serviceName: router
              servicePort: 80
While defining the Ingress, use the nginx.ingress.kubernetes.io/configuration-snippet annotation; also enable the proxy protocol using use-proxy-protocol: "true". Using this annotation you can add additional configuration to the NGINX location block; a sketch follows below.
Please take a look: ingress-nginx-issue, advanced-configuration-with-annotations.
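A hedged sketch of both pieces, assuming a stock ingress-nginx install (the ConfigMap name must match the controller's --configmap flag, and the proxy_set_header line is purely illustrative):

apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configuration   # assumed name; check your controller's deployment
  namespace: ingress-nginx
data:
  use-proxy-protocol: "true"  # controller-wide setting, not an annotation
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: router
  namespace: default
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/configuration-snippet: |
      # extra directives injected into the generated location block
      proxy_set_header X-Forwarded-Proto $scheme;
spec:
  rules:
    - host: my-router.domain.com
      http:
        paths:
          - path: /
            backend:
              serviceName: router
              servicePort: 80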

Ingress without IP address

I created an Ingress to expose my internal service:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: app-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
    - host: example.com
      http:
        paths:
          - path: /app
            backend:
              serviceName: my-app
              servicePort: 80
But when I get this Ingress, it shows no IP address:
NAME          HOSTS         ADDRESS   PORTS   AGE
app-ingress   example.com             80      10h
The service is shown below:
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  selector:
    app: my-app
  ports:
    - name: my-app
      nodePort: 32000
      port: 3000
      targetPort: 3000
  type: NodePort
Note: I'm guessing because of the other question you asked that you are trying to create an ingress on a manually created cluster with kubeadm.
As described in the docs, in order for ingress to work, you need to install an ingress controller first. An Ingress object itself is merely a slice of configuration for the installed ingress controller.
The NGINX-based controller is one of the most popular choices. Similarly to Services, in order to get a single failover-enabled VIP for your ingress, you need to use MetalLB (a minimal sketch follows). Otherwise you can deploy ingress-nginx over a NodePort: see details here.
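If you go the MetalLB route, a minimal Layer 2 sketch (older MetalLB releases are configured through a ConfigMap like this; the address range is an assumed spare range on the node LAN):

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.1.240-192.168.1.250   # assumed free range; handed out to LoadBalancer Services

With this in place, Services of type LoadBalancer (including the ingress controller's) receive an external IP from the pool, and the Ingress ADDRESS column gets populated.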
Finally, servicePort in your Ingress object should be 3000, the same as the port of your Service, as in the corrected manifest below.
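For clarity, the questioner's Ingress with only the backend port corrected:

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: app-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
    - host: example.com
      http:
        paths:
          - path: /app
            backend:
              serviceName: my-app
              servicePort: 3000   # must match the Service's port (3000), not the nodePort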

Nginx Ingress Failing to Serve

I am new to k8s. I have a Deployment file shown below:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jenkins-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      component: web
  template:
    metadata:
      labels:
        component: web
    spec:
      containers:
        - name: jenkins
          image: jenkins
          ports:
            - containerPort: 8080
            - containerPort: 50000
My Service file is as follows:
apiVersion: v1
kind: Service
metadata:
  name: jenkins-svc
spec:
  type: ClusterIP
  ports:
    - port: 80
      targetPort: 8080
      name: http
  selector:
    component: web
My Ingress file is:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: jenkins-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
    - host: jenkins.xyz.com
      http:
        paths:
          - path: /
            backend:
              serviceName: jenkins-svc
              servicePort: 80
I am using the nginx ingress project, and my cluster was created using kubeadm with 3 nodes.
I first ran the mandatory command:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/mandatory.yaml
When I tried hitting jenkins.xyz.com it didn't work. When I ran kubectl get ing, the Ingress resource didn't get an IP address assigned to it.
The Ingress resource is nothing but the configuration of a reverse proxy (the ingress controller).
It is normal that the Ingress doesn't get an IP address assigned.
What you need to do is connect to your ingress controller instance(s).
In order to do so, you need to understand how they're exposed in your cluster.
Considering the YAML you say you used to get the ingress controller running, there is no sign of exposure to the outside network.
You need at least to define a Service to expose your controller (it might be a LoadBalancer if the provider hosting your cluster supports it), or you can use hostNetwork: true or a NodePort; a hostNetwork sketch follows.
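For the hostNetwork route, a minimal sketch of the relevant fragment of the controller's Deployment (only these two fields are the point; everything else is assumed to come from the standard manifest):

spec:
  template:
    spec:
      hostNetwork: true                   # controller binds ports 80/443 directly on the node
      dnsPolicy: ClusterFirstWithHostNet  # keeps in-cluster DNS resolution working with hostNetwork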
To use the latter option (NodePort), you could apply this YAML:
https://github.com/kubernetes/ingress-nginx/blob/master/deploy/static/provider/baremetal/service-nodeport.yaml
I suggest you read the Ingress documentation page to get a clearer idea of how all this works:
https://kubernetes.io/docs/concepts/services-networking/ingress/
In order to access your local Kubernetes cluster's pods, a NodePort needs to be created. The NodePort publishes your service on every node using its public IP and a port, and you can then access the service using any of the node IPs and the assigned port.
Defining a NodePort in Kubernetes:
apiVersion: v1
kind: Service
metadata:
  name: nginx-service-np
  labels:
    name: nginx-service-np
spec:
  type: NodePort
  ports:
    - port: 8082        # Cluster IP port, i.e. http://10.103.75.9:8082
      targetPort: 8080  # Application port
      nodePort: 30000   # External port on every node (VirtualBox IPs), i.e. http://192.168.50.11:30000/ http://192.168.50.12:30000/ http://192.168.50.13:30000/
      protocol: TCP
      name: http
  selector:
    app: nginx
See a full example with source code at Building a Kubernetes Cluster with Vagrant and Ansible (without Minikube).
The nginx ingress controller can also be replaced with Istio if you want to benefit from a service mesh architecture for:
Load balancing traffic, external or internal
Control failures, retries, routing
Apply limits and monitor network traffic between services
Secure communication
See Installing Istio in Kubernetes under VirtualBox (without Minikube).

EKS to integrate Kubernetes ingress

Can anybody point me to the workflow for directing traffic to my domain through an Ingress on EKS?
I have this:
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: hello-world
  labels:
    app: hello-world
  annotations:
    kubernetes.io/ingress.class: nginx
    ingress.kubernetes.io/rewrite-target: /
spec:
  backend:
    serviceName: hello-world
    servicePort: 80
  rules:
    - host: DOMAIN-I-OWN.com
      http:
        paths:
          - path: /
            backend:
              serviceName: hello-world
              servicePort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: hello-world
  labels:
    app: hello-world
spec:
  ports:
    - port: 80
      targetPort: 32000
      protocol: TCP
      name: http
  selector:
    app: hello-world
And I am able to hit DOMAIN-I-OWN.com using minikube:
kubectl config use-context minikube
echo "$(minikube ip) DOMAIN-I-OWN.com" | sudo tee -a /etc/hosts
But I can't find a tutorial on how to do the same thing on AWS EKS.
I have set up an EKS cluster and have 3 nodes running, with pods deployed using those Ingress and Service specs.
Let's say I own "DOMAIN-I-OWN.com" through Google Domains or GoDaddy.
What would be the next step to set up the DNS?
Do I need an ingress controller? Do I need to install it separately to make this work?
Any help would be appreciated! I've been stuck on this for several days...
You need to wire up something like https://github.com/kubernetes-incubator/external-dns to automatically point DNS names to your cluster's published services' IPs.
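A hedged sketch of how external-dns is typically wired in: it watches Services and Ingresses and creates DNS records for their hostnames, for example via this annotation (shown here on a LoadBalancer Service, which is an assumption; the record target depends on your DNS provider setup):

apiVersion: v1
kind: Service
metadata:
  name: hello-world
  annotations:
    external-dns.alpha.kubernetes.io/hostname: DOMAIN-I-OWN.com  # record external-dns should create
spec:
  type: LoadBalancer
  ports:
    - port: 80
      targetPort: 32000
  selector:
    app: hello-world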
Take a look at https://github.com/kubernetes-sigs/aws-alb-ingress-controller. It provides a controller that watches for ingress events from the API server. When it finds Ingress resources that satisfy its requirements, it begins creating AWS resources (subnets, security groups, ALBs).
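For reference, a hedged sketch of the annotations that controller expects on an Ingress (values are illustrative assumptions):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: hello-world
  annotations:
    kubernetes.io/ingress.class: alb                  # hand this Ingress to the ALB controller
    alb.ingress.kubernetes.io/scheme: internet-facing # provision a public-facing ALB
spec:
  rules:
    - host: DOMAIN-I-OWN.com
      http:
        paths:
          - path: /
            backend:
              serviceName: hello-world
              servicePort: 80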
You can create a hosted zone on AWS Route 53 and add the records to GoDaddy. If you use https://github.com/kubernetes-sigs/aws-alb-ingress-controller, then after the ingress is set up you will get a CNAME; add it to your Route 53 record.