expose pgbouncer service to external clients - kubernetes

I am trying to run pgbouncer on Kubernetes. A Helm chart created the Deployment and Service… now how do I expose the service to the outside world? I'm not very familiar with Kubernetes networking; I tried to create an Ingress resource and it created an ELB in AWS… how do I map this ELB to the service and expose it?
The service is created with type ClusterIP… and it is a plain TCP service, i.e. not an HTTP/HTTPS application.
The Helm chart used is: https://github.com/futuretechindustriesllc/charts/tree/master/charts/pgbouncer

Ingresses are only used for HTTP and friends. In this case what you want is probably a LoadBalancer-type Service. That will set up the load-balancing fabric and expose it via an ELB, which forwards TCP traffic straight to your pods.
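For example, a Service along these lines should work. This is a minimal sketch: the name, selector, and ports are assumptions and must be matched to what the Helm chart actually created (check with kubectl get svc,pods --show-labels).

apiVersion: v1
kind: Service
metadata:
  name: pgbouncer-external      # hypothetical name
spec:
  type: LoadBalancer            # on AWS this provisions an ELB
  selector:
    app: pgbouncer              # must match the labels on the chart's pods
  ports:
    - name: postgres
      protocol: TCP
      port: 5432                # port the ELB listens on
      targetPort: 5432          # port pgbouncer listens on in the pod

Alternatively, since the chart already created a ClusterIP Service, you can simply switch its type in place:

kubectl patch svc <pgbouncer-service-name> -p '{"spec": {"type": "LoadBalancer"}}'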

Related

Exposing Service from a BareMetal(Kubeadm) Kubernetes Cluster to outside world

Exposing a Service from a bare-metal (kubeadm-built) Kubernetes cluster to the outside world. I am trying to access NGINX as a service from outside the cluster, to see the NGINX output in a web browser.
For that, I have created a Deployment and a Service for NGINX as shown below.
From my research, I found the following options for exposing a service to the outside world:
MetalLB
Ingress NGINX
Some Helm resources
I would like to learn about all three of these, plus any other approaches, so that I can learn something new.
GOAL
Exposing Service from a BareMetal(Kubeadm) Built Kubernetes Cluster to the outside world.
How can I give my service its own public IP, accessible from outside the cluster?
You need to set up MetalLB to get an external IP address for LoadBalancer-type services. It will assign each such service an IP address from your local network.
Then you can map incoming traffic on ports 80 and 443 (port-forwarding configured in your router) to that external service IP address. A sketch of the MetalLB configuration follows.
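Assuming MetalLB v0.13 or later (where configuration moved to CRDs) and Layer 2 mode, the setup is roughly the following; the address range here is an assumption and must be a free range on your LAN:

apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: home-pool                     # hypothetical name
  namespace: metallb-system
spec:
  addresses:
    - 192.168.1.240-192.168.1.250     # assumed unused range on the local network
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: home-l2
  namespace: metallb-system
spec:
  ipAddressPools:
    - home-pool                       # advertise addresses from the pool above

Any Service of type LoadBalancer then gets an IP from that pool, and the router's port-forwarding points at it.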
I have done a similar setup you can check it here in detail:
https://developerdiary.me/lets-build-low-budget-aws-at-home/
You need to deploy an ingress controller in your cluster so that it gives you an entry point where your applications can be accessed. Traditionally, in a cloud-native environment it would automatically provision a LoadBalancer for you that reads the rules you define inside your Ingress objects and routes requests to the appropriate service.
One of the most commonly used ingress controllers is the Nginx Ingress Controller. There are multiple ways to deploy it (manifests, Helm, operators). For bare-metal clusters there are several extra considerations, which you can read about here.
MetalLB is still in beta, so it's your choice whether you want to use it. If you don't have a hard requirement to expose the ingress controller as a LoadBalancer, you can expose it as a NodePort Service, which will be accessible across all the nodes in the cluster. You can then point your DNS at that NodePort Service so that the ingress rules are evaluated. A sketch of that NodePort Service follows.
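The sketch below exposes the controller on fixed node ports; the namespace and selector labels follow the usual ingress-nginx conventions but are assumptions that should be checked against your actual install:

apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  type: NodePort
  selector:
    app.kubernetes.io/name: ingress-nginx    # assumed controller pod label
  ports:
    - name: http
      port: 80
      targetPort: 80
      nodePort: 30080     # reachable on every node at <NodeIP>:30080
    - name: https
      port: 443
      targetPort: 443
      nodePort: 30443     # reachable on every node at <NodeIP>:30443

Your DNS (or router port-forwarding) then points at any node's IP on those ports.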

Get Externally accessible IP address of Pod in Kubernetes

I need to create two instances using the same Ubuntu image in Kubernetes. Each instance uses two ports, 8080 and 9090. How can I access these two ports externally? Can we use the IP address of the worker node in this case?
If you want to access your Ubuntu instances from outside the Kubernetes cluster, you should place the pods behind a Service.
You can access services through public IPs:
Create a Service of type NodePort: the service will be available on <NodeIP>:<NodePort>.
Create a Service of type LoadBalancer: if you are running your workload in the cloud, this will automatically deploy a load balancer for you.
Alternatively, you can deploy an Ingress to expose your Service; you would also need an Ingress Controller for that. A sketch of the NodePort option follows.
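For the two ports in the question, a single NodePort Service can cover both. This is a sketch; the name and selector label are assumptions that must match your pods:

apiVersion: v1
kind: Service
metadata:
  name: ubuntu-service       # hypothetical name
spec:
  type: NodePort
  selector:
    app: ubuntu              # assumed pod label
  ports:
    - name: port-8080
      port: 8080
      targetPort: 8080       # a nodePort in 30000-32767 is auto-assigned if omitted
    - name: port-9090
      port: 9090
      targetPort: 9090

kubectl get svc ubuntu-service then shows the assigned node ports, and each is reachable on any worker node's IP.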
Useful links:
GCP example
Ingress Controller
Ingress
Kubernetes Service

GKE: ingress with sub-domain

I used to work with an OpenShift/OKD cluster deployed in AWS, where it was possible to connect the cluster to a domain name from Route53. Then, as soon as I deployed an ingress with host mappings (where the hosts were subdomains of the base domain), all the necessary LB rules (Routes in OpenShift) and the subdomain itself were created by OpenShift and were directly reachable. For example: OpenShift is connected to the domain "somedomain.com", which is registered in Route53. In the ingress I have a host mapping like:
hosts:
  - host: sub1.somedomain.com
    paths:
      - path
After deployment I can reach sub1.somedomain.com. Is this kind of functionality available in GKE?
So far I have seen only mapping to static IP.
Also, I read here https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-http2 that if I need to connect a service to an ingress, the service has to be of type NodePort. Is it really so? In OpenShift this was not required; any normal ClusterIP service could be connected to an ingress.
Thanks in advance!
I think you should consider other Ingress Controllers for your use case.
I'm not an expert on GKE, but from what I can see in Best practices for enterprise multi-tenancy, you additionally need to consider how to route multiple Ingress hostnames through a wildcard subdomain, as OpenShift does.
Set up HTTP(S) Load Balancing with Ingress:
You can create and configure an HTTP(S) load balancer by creating a Kubernetes Ingress resource, which defines how traffic reaches your Services and how the traffic is routed to your tenant's application. By registering Services with the Ingress resource, the Services' naming convention becomes consistent, showing a single ingress, such as tenanta.example.com and tenantb.example.com.
The routing feature basically depends on the Ingress Controller.
From what I have found, the default Ingress Controller of GKE just creates a Google Cloud HTTP(S) Load Balancer, but it does not consider multi-tenancy by default the way OpenShift does.
In contrast, in OpenShift the Ingress Controller is implemented using HAProxy with a dynamic-configuration feature, as follows:
LB -tenanta.example.com--> HAProxy(directly forward the tenanta.example.com traffic to the target pod IPs) ---> Target Pods
The type of service exposure depends on the Kubernetes implementation of each cloud provider.
If the ingress controller is a component inside your cluster, a ClusterIP is enough to make your service reachable, internally, from inside the cluster itself.
If the ingress definition configures an external element (in the case of GKE, a load balancer), that element isn't part of the cluster and can't reach the ClusterIP (which is only accessible internally). A NodePort is required in this case.
So, in your case, either you expose your service as a NodePort (sketched below), or you configure GKE with another ingress controller, installed locally in the cluster, instead of the default one.
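To illustrate the NodePort requirement, a backend for the default GKE ingress controller typically looks like the following; the names here are placeholders:

apiVersion: v1
kind: Service
metadata:
  name: web-service          # hypothetical name
spec:
  type: NodePort             # required by the default GKE ingress controller
  selector:
    app: web                 # assumed pod label
  ports:
    - port: 80
      targetPort: 8080
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress
spec:
  rules:
    - host: sub1.somedomain.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web-service
                port:
                  number: 80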
So far GKE does not provide the ability to create subdomains dynamically. The desired situation would be for the GKE cluster to be assignable to a DNS zone managed in GCP, with a mimic of OpenShift Routes via, for example, ingress annotations.
But the reality right now is that you have to create the subdomain or domain yourself, as well as the IP address you connect this domain to. That particular GCP IP address can then be attached to an ingress by name using annotations, or used in a LoadBalancer service, as in the example below.
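For example, assuming a global static IP reserved beforehand under the name web-static-ip (gcloud compute addresses create web-static-ip --global) and a DNS record for the subdomain pointing at it, the IP can be attached to the ingress with an annotation:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: web-static-ip   # assumed reserved IP name
spec:
  defaultBackend:
    service:
      name: web-service      # hypothetical NodePort service
      port:
        number: 80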

In Rancher 2.0 Kubernetes, ClusterIP mode service is not served in round robin fashion without Loadbalancer ingress

What I have:
I have created a Kubernetes cluster using a single-node Rancher 2.0 deployment, with 3 etcd/control-plane nodes and 2 worker nodes attached to the cluster.
What I did:
I deployed an API gateway to this cluster, and an Express mydemoapi service (no DB) with 5 pods on 2 nodes on port 5000, which I don't want to expose publicly. So I just mapped that service endpoint by service name in the API gateway as http://mydemoapi:5000, and it was accessible through the gateway's public endpoint.
Problem statement:
The mydemoapi service is served in random fashion, not round robin, because the default kube-proxy setting is random, as per the Rancher documentation: Load balancing in Kubernetes.
Partial success:
I created an ingress load balancer with the "Keep the existing hostname" option in the Rancher rules, with the URL mydemoapi.<namespace>.153.xx.xx.102.xip.io, and attached the service to this ingress. It is now served in round-robin fashion, but with one problem: the service uses xip.io with the public IP of my worker node and is exposed publicly.
Help needed:
I want to map my internal ClusterIP service into the gateway with internal-only access, so that it is served to the gateway internally in round-robin fashion and from there reaches the gateway's public endpoint. I don't want to expose my service publicly except through the gateway.
Not sure which cloud you are running on, but if you are running in something like AWS you can set the following annotation to true on your Service definition:
service.beta.kubernetes.io/aws-load-balancer-internal: "true"
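In context, the annotation goes in the Service's metadata. This is a sketch with placeholder names, sized for the mydemoapi example above:

apiVersion: v1
kind: Service
metadata:
  name: mydemoapi-internal      # hypothetical name
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-internal: "true"   # internal-only ELB
spec:
  type: LoadBalancer
  selector:
    app: mydemoapi              # assumed pod label
  ports:
    - port: 5000
      targetPort: 5000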
Other cloud providers have similar solutions, and some don't have one at all. In that case, you will have to use a NodePort Service and point an external load balancer, such as one running haproxy or nginx, at that NodePort.
Another option, if you want round robin between your services without using an Ingress at all, is to change your kube-proxy configuration to use either the old userspace proxy mode or the more capable ipvs proxy mode, as sketched below.
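A sketch of the relevant kube-proxy settings, switching to IPVS with an explicit round-robin scheduler; how this configuration file is supplied depends on how the cluster was provisioned:

apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"        # replaces the default iptables (random) mode
ipvs:
  scheduler: "rr"   # round robin across endpoints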

Kubernetes External Load Balancer Service on DigitalOcean

I'm building a container cluster using CoreOS and Kubernetes on DigitalOcean, and I've seen that in order to expose a Pod to the world you have to create a Service of type LoadBalancer. I think this is the optimal solution, because then you don't need to add an external load balancer outside Kubernetes, such as nginx or haproxy. I was wondering if it is possible to create this using DO's Floating IPs.
Things have changed: DigitalOcean created their own cloud provider implementation, as answered here, and they are maintaining a Kubernetes "Cloud Controller Manager" implementation:
Kubernetes Cloud Controller Manager for DigitalOcean
Currently digitalocean-cloud-controller-manager implements:
nodecontroller - updates nodes with cloud-provider-specific labels and addresses, and deletes Kubernetes nodes when they are deleted on the cloud provider.
servicecontroller - responsible for creating LoadBalancers when a service of Type: LoadBalancer is created in Kubernetes.
To try it out, clone the project on your master node.
Next, get the token key from https://cloud.digitalocean.com/settings/api/tokens and run:
export DIGITALOCEAN_ACCESS_TOKEN=abc123abc123abc123
scripts/generate-secret.sh
kubectl apply -f do-cloud-controller-manager/releases/v0.1.6.yml
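Once the controller manager is running, a plain LoadBalancer Service is enough for the servicecontroller described above to provision a DigitalOcean load balancer. A generic sketch, with placeholder names:

apiVersion: v1
kind: Service
metadata:
  name: nginx-lb        # hypothetical name
spec:
  type: LoadBalancer    # picked up by the servicecontroller
  selector:
    app: nginx          # assumed pod label
  ports:
    - port: 80
      targetPort: 80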
There are more examples here.
What will happen once you do the above? DO's cloud controller manager will create a load balancer (one that has a failover mechanism out of the box; more on it in the load balancer's documentation).
Things will change again soon, as DigitalOcean is jumping on the Kubernetes bandwagon (check here): you will have the choice to let them manage your Kubernetes cluster instead of worrying about much of the infrastructure yourself (this is my understanding of the service; let's see how it works when it becomes available...).
The LoadBalancer service type is implemented by adding cloud-provider-specific code to the Kubernetes master. There isn't a cloud provider for DigitalOcean (see supported cloud providers), so the LoadBalancer type will not be able to take advantage of DigitalOcean's Floating IPs.
Instead, you should consider using a NodePort service, or attaching an external IP to your service and mapping the exposed IP to a DO Floating IP.
It is actually possible to expose a service through a Floating IP. The only catch is that the external IP you need to use is a little unintuitive.
It seems DO has some sort of overlay network for their Floating IP service. To get the actual IP you need to expose, SSH into your gateway droplet and find its anchor IP by querying the metadata service:
curl -s http://169.254.169.254/metadata/v1/interfaces/public/0/anchor_ipv4/address
and you will get something like
10.x.x.x
This is the address that you can use as an external IP in a LoadBalancer-type Service in Kubernetes.
Example:
kubectl expose rc my-nginx --port=80 --external-ip=10.x.x.x --type=LoadBalancer
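The declarative equivalent of that command would be roughly the following sketch, keeping the same placeholder anchor IP:

apiVersion: v1
kind: Service
metadata:
  name: my-nginx
spec:
  type: LoadBalancer
  externalIPs:
    - 10.x.x.x           # the anchor IP returned by the metadata query above
  selector:
    app: my-nginx        # assumed to match the replication controller's pod labels
  ports:
    - port: 80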