Is it possible or right to configure the NGINX Ingress Controller on an OpenShift cluster?

I have a Kubernetes cluster set up on my system. I have verified that my definitions for the Ingress controller and Ingress resources work on it, as I am able to invoke services inside my K8s cluster from outside the cluster.
Now I have to move all my resources to an OpenShift cluster. I deployed my Ingress resource definitions from K8s to OpenShift, but they are not working: I am not able to access the services from outside the cluster. Note that I did not deploy an Ingress controller on OpenShift as I had on K8s.
The problem is that I don't want to use an OpenShift Route as an Ingress alternative. So what am I supposed to do to get my Ingress resources working on the OpenShift (OC) cluster? Should I install an Ingress controller on OpenShift like I did on the K8s cluster?
I don't want to use any OpenShift-specific resources on the OpenShift cluster; I just want to utilize its embedded Kubernetes.
Kindly suggest.
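For what it's worth, deploying the NGINX Ingress Controller on OpenShift generally works the same way as on plain Kubernetes (e.g. via the stable/nginx-ingress Helm chart mentioned further down), with one OpenShift-specific step: the controller's service account usually needs a security context constraint (SCC) that allows it to run with the user ID and capabilities it expects. A minimal sketch, assuming the controller is installed into an ingress-nginx namespace with a nginx-ingress service account (both names hypothetical):

oc new-project ingress-nginx
# Grant the controller's service account an SCC that permits its UID and capabilities;
# anyuid is broad, so a narrower custom SCC is preferable in production.
oc adm policy add-scc-to-user anyuid -z nginx-ingress -n ingress-nginx
# Then install the controller as usual, e.g. with Helm.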

Related

Using Nginx Ingress to implement Blue-Green Deployment between two AKS clusters

Please, I wanted to find out if it is possible to implement a blue-green deployment for my application on separate AKS clusters using the NGINX ingress controller.
I have the current application (blue) running on one AKS cluster, and I have a new AKS cluster (green, with a new K8s version) to which I want to migrate my workloads.
Is it possible to implement the blue-green deployment strategy between these two AKS clusters using the NGINX ingress controller? If so, can someone give me a suggestion on how this can be implemented? Thank you.
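One common pattern for this (an assumption on my part, not something confirmed in this thread) is to run identical NGINX ingress setups in both clusters and shift traffic between the two clusters' external IPs outside Kubernetes, e.g. with weighted DNS records or an Azure Traffic Manager profile. A sketch of the Ingress that would be deployed unchanged to both clusters (hostname and service names are hypothetical):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp   # deployed identically in the blue and the green cluster
spec:
  ingressClassName: nginx
  rules:
  - host: myapp.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: myapp
            port:
              number: 80

Cut-over then happens at the DNS/traffic-manager level, by moving the weight from the blue cluster's ingress IP to the green one's.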

How to configure Ingress Controller on Kubernetes Cluster installed with kubeadm

I installed a Kubernetes cluster with "kubeadm" on Hetzner Cloud.
After the cluster installation succeeded, I installed the Ingress controller with Helm.
The EXTERNAL-IP of the ingress controller service is stuck in the pending state.
The default service type is LoadBalancer, and as far as I know this type is only supported out of the box by cloud providers like AWS and Google.
So I changed the service type to NodePort.
How should I configure external DNS for my services?
I don't want to append the 3xxxx NodePort to every URL; I'd rather let the Ingress controller manage that for me.
The article Setting up ExternalDNS for Services on Hetzner DNS provides an efficient, working method to manage external DNS.
Main steps
1. Creating a Hetzner DNS zone
2. Creating Hetzner Credentials
3. Deploy ExternalDNS
4. Deploying an Nginx Service
5. Verifying Hetzner DNS records
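As an illustration of steps 4–5, ExternalDNS watches annotated Services (and Ingresses) and creates the matching records in the Hetzner zone. A minimal sketch of the NGINX Service from step 4 (the hostname is hypothetical):

apiVersion: v1
kind: Service
metadata:
  name: nginx
  annotations:
    # ExternalDNS reads this annotation and creates the corresponding DNS record
    external-dns.alpha.kubernetes.io/hostname: nginx.example.com
spec:
  type: LoadBalancer   # or NodePort, as in the question above
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80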

What is an Ingress controller and how do I create it?

Good morning guys, so I took down a staging environment for a product on GCP and ran the deployment scripts again. The backend and frontend services have been set up, and I have an Ingress resource and a load balancer up; however, the service is not running. A look at the production app revealed there was something like an nginx-ingress-controller. I really don't understand all of this and how it was created. Can someone help me understand, because I have not seen anything online that makes it clear for me? Am I missing something?
loadBalancer: https://gist.github.com/davidshare/5a571e56febe7dacd580282b373f3095
Ingress Resource: https://gist.github.com/davidshare/d0f53912bc7da8310ec3d64f1c8a44f1
An Ingress allows access to your Kubernetes services from outside the Kubernetes cluster. There are other Kubernetes (aka K8s) resources, such as NodePort or LoadBalancer, that you can alternatively use to expose them.
An Ingress is a resource independent of your services: you specify routing rules declaratively, so each URL path can be mapped to a different service.
This keeps it decoupled and isolated from the services you want to expose.
For an Ingress to work, it needs an Ingress controller running in your cluster.
Like a Deployment resource in K8s, an Ingress can be created simply with
kubectl create -f ingress.yaml
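For illustration, a minimal ingress.yaml with the declarative URL-to-service mapping described above could look like this (hostname and service names are hypothetical):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  ingressClassName: nginx
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /api        # /api traffic goes to the backend service
        pathType: Prefix
        backend:
          service:
            name: backend-svc
            port:
              number: 8080
      - path: /           # everything else goes to the frontend service
        pathType: Prefix
        backend:
          service:
            name: frontend-svc
            port:
              number: 80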
First, you have to deploy an Ingress controller in order for an Ingress resource to take effect, as described in Shubhu's answer. The Ingress controller, acting as an edge router, routes external traffic to your Kubernetes cluster's underlying services according to the routing rules defined in the Ingress resource.
If you choose the NGINX Ingress Controller, it might be useful to follow the installation guide, which covers specific prerequisites for each cloud provider environment. To simplify the installation procedure, you can also use the Helm package manager and install the stable/nginx-ingress Helm chart.
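A sketch of that Helm route, using Helm 2 syntax to match the stable/nginx-ingress chart named above (the release name is hypothetical):

helm install stable/nginx-ingress --name nginx-ingress --set rbac.create=true
# verify that the controller pod is running and its LoadBalancer service got an IP
kubectl get pods,svc -l app=nginx-ingress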

Accessing services without pod (without istio envoy) from outside cluster through istio ingress rules in K8s

Steps:
1. I have created 2 namespaces (ns1 and ns2).
2. In ns1, I have deployed a service with the Envoy proxy enabled (istioctl kube-inject service.yaml).
3. In ns1, I have created Istio ingress rules pointing to that service, and I am able to access it from outside the cluster.
4. In ns2, I haven't deployed any service because it is my shared namespace; instead, I have created a headless service (ExternalName) pointing to the service deployed in the ns1 namespace (a sketch of such a service is shown below).
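A sketch of such an ExternalName Service (all names hypothetical):

apiVersion: v1
kind: Service
metadata:
  name: shared-svc
  namespace: ns2
spec:
  type: ExternalName
  # resolves, via a DNS CNAME, to the service living in ns1
  externalName: my-svc.ns1.svc.cluster.local

Note that an ExternalName Service is only a DNS alias and registers no endpoints of its own.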
The problem is, I am not able to access the service exposed in ns2 from outside the cluster; it throws a 404 service not found error.
Did I miss anything here? Do we have any other solution to address this?
Thanks,
Nikhil

What is the glue between K8s Ingress and Google load balancers

I am using Kubernetes on Google Container Engine, and I still don't understand how the load balancers are "magically" getting configured when I create or update any of my Ingresses.
My understanding was that I needed to deploy a glbc / GCE L7 container, and that container would watch the Ingresses and do the job. I've never deployed such a container. So maybe it is part of the glbc cluster add-on, and it works even before I do anything?
Yet, on my cluster, I can see an "l7-default-backend-v1.0" Replication Controller in kube-system, with its pod and NodePort service, and it corresponds to what I see in the LB configs/routes. But I can't find anything like an "l7-lb-controller" that would do the provisioning; no such container exists on the cluster.
So where is the magic? What is the glue between the Ingresses and the LB provisioning?
Google Container Engine runs the glbc "glue" on your behalf unless you explicitly request it to be disabled as a cluster add-on (see https://cloud.google.com/container-engine/reference/rest/v1/projects.zones.clusters#HttpLoadBalancing).
Just like you don't see a pod in the system namespace for the scheduler or controller manager (like you do if you deploy Kubernetes yourself), you don't see the glbc controller pod either.
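If you want to see or toggle that behaviour yourself, the add-on can be inspected and managed with gcloud (cluster name and zone are hypothetical):

gcloud container clusters describe my-cluster --zone us-central1-a | grep -A 1 httpLoadBalancing
# disabling the add-on removes the managed glbc glue, so Ingresses would then need another controller
gcloud container clusters update my-cluster --zone us-central1-a --update-addons HttpLoadBalancing=DISABLED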