Can PostgreSQL service in OpenShift cluster receive external traffic via exposed route - postgresql

Is it possible to run PostgreSQL as a service inside an OpenShift cluster and get external traffic to it via an exposed route (the recommended default way for communicating from the outside)?
The OpenShift 3.9 documentation states this:
A router is configured to accept external requests and proxy them
based on the configured routes. This is limited to
HTTP/HTTPS(SNI)/TLS(SNI), which covers web applications.
PostgreSQL can do SSL, and it can be configured to listen on port 443 (HTTPS), but I think it cannot do SNI yet. I would only run a single pod behind the service, so load balancing should not cause issues. If feasible, I would expect to connect into the cluster and to the service with psql -h ....

You can create a LoadBalancer service, which will create a load balancer dedicated to just your pods; this load balancer accepts TCP-based traffic: https://docs.openshift.com/container-platform/3.9/admin_guide/tcp_ingress_external_ports.html#unique-external-ips-ingress-traffic-configure-service
So something like

oc expose dc postgres --type=LoadBalancer --name=postgres-lb

would get you a publicly accessible PostgreSQL DB.
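Written out as a manifest, the equivalent Service looks roughly like this (a sketch; the app: postgres selector and port 5432 are assumptions about your deployment):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: postgres-lb
spec:
  type: LoadBalancer
  selector:
    app: postgres        # assumed pod label
  ports:
  - port: 5432           # port exposed by the load balancer
    targetPort: 5432     # port the PostgreSQL container listens on
```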

I have come to the (preliminary) conclusion that OpenShift port forwarding (oc port-forward) offers the best way (well ... :) forward in this situation.
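For reference, a minimal port-forwarding session looks like this (postgres-1-abcde stands in for the actual pod name shown by oc get pods):

```
oc port-forward postgres-1-abcde 5432:5432 &
psql -h localhost -p 5432 -U postgres
```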

Related

Connecting to many kubernetes services from local machine

From my local machine I would like to be able to port forward to many services in a cluster.
For example, I have services named serviceA-type1, serviceA-type2, serviceA-type3, etc. None of these services is accessible externally, but they can be accessed using the kubectl port-forward command. However, there are so many services that port-forwarding to each one is infeasible.
Is it possible to create some kind of proxy service in Kubernetes that would let me reach any of the serviceA-typeN services by specifying them in a URL? I would like to port-forward to the proxy service from my local machine and have it forward the requests on to the serviceA-typeN services.
So for example, if I have set up a port forward on 8080 to this proxy, then the URL to access the serviceA-type1 service might look like:
http://localhost:8080/serviceA-type1/path/to/endpoint?a=1
I could maybe create a small application that would do this, but does Kubernetes provide this functionality already?
The kubectl proxy command provides this functionality.
Read more here: https://kubernetes.io/docs/tasks/administer-cluster/access-cluster-services/#manually-constructing-apiserver-proxy-urls
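Concretely, assuming the services live in the default namespace and expose a port named http (a numeric port works too), the proxied URLs look like this:

```
kubectl proxy --port=8080 &
curl 'http://localhost:8080/api/v1/namespaces/default/services/serviceA-type1:http/proxy/path/to/endpoint?a=1'
```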
Another good option is to use Ingress to achieve this.
Read more about what an Ingress is.
Main concepts are:
- Ingress exposes HTTP and HTTPS routes from outside the cluster to services within the cluster. Traffic routing is controlled by rules defined on the Ingress resource.
- An Ingress may be configured to give Services externally-reachable URLs, load balance traffic, terminate SSL / TLS, and offer name-based virtual hosting.
- An Ingress controller is responsible for fulfilling the Ingress, usually with a load balancer, though it may also configure your edge router or additional frontends to help handle the traffic.
- An Ingress does not expose arbitrary ports or protocols. Exposing services other than HTTP and HTTPS to the internet typically uses a service of type Service.Type=NodePort or Service.Type=LoadBalancer.
In Kubernetes there are four types of Services, and the default type is ClusterIP, which means the service is only reachable within the cluster. Ingress exposes your service outside the cluster, so the Ingress acts as the entry point into your cluster.
If you plan to move to the cloud later, an Ingress-based setup is compatible with the cloud providers' load balancing services, which saves time and makes migrating from a local environment easier.
To start with Ingress, you need to install an ingress controller first.
There are different ingress controllers you can use; a common starting point is ingress-nginx, which is supported by the Kubernetes community.
If you're using minikube, it can be enabled as an addon - see here.
Once you have installed an ingress controller in your cluster, you need to create Ingress rules for it to act on. A simple fanout, sketched below, routes traffic to two services based on the request path.
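A minimal simple-fanout sketch, reusing the service names from the question (host, paths, and ports are placeholders):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: simple-fanout
spec:
  rules:
  - host: example.local            # placeholder hostname
    http:
      paths:
      - path: /serviceA-type1
        pathType: Prefix
        backend:
          service:
            name: serviceA-type1
            port:
              number: 8080         # assumed service port
      - path: /serviceA-type2
        pathType: Prefix
        backend:
          service:
            name: serviceA-type2
            port:
              number: 8080
```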

How to expose 80 and 443 to the Internet with Kubernetes like I did with docker-compose?

From what I understand, exposing pods or deployments can be achieved with a NodePort, ClusterIP, or LoadBalancer service.
Coming from the docker-compose world, my stack was quite simple: all my applications run on a private Docker network, along with a reverse proxy (Caddy or NGINX) that is the only service with port mappings (:80 and :443) and that can, of course, reach the private network.
So basically: Internet ----> Caddy ----> Private applications in a docker-compose stack.
Q1: How can I do such things with Kubernetes in a "bare-metal" context? I mean, if I do not want to use a cloud load balancer provider?
Q2: Is it because Kubernetes was never built to expose applications like this? Is it automatically dependent on a cloud provider?
Q1: You might want to look into https://metallb.universe.tf/. It's a load balancer that works just like the common cloud load balancers, but on your local cluster. It's fairly easy to set up and works great with any reverse proxy.
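A minimal layer-2 configuration sketch using MetalLB's classic ConfigMap format (newer MetalLB releases configure this via CRDs instead; the address range is a placeholder for a free range on your LAN):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.1.240-192.168.1.250   # placeholder address range
```

Any Service of type LoadBalancer then gets an external IP from that pool, which your reverse proxy (or an ingress controller) can sit behind.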
Q2: Kubernetes is primarily developed for cloud environments and is definitely easier to run in that context. Running it locally often requires additional tools to replicate the functionality of its cloud service counterparts.
This depends on how your networking is set up. Kubernetes typically runs its own network, and now you want traffic from outside the cluster to reach applications within the cluster - typically through a "gateway" / "proxy".
For HTTP and HTTPS traffic, it is common that this "gateway" / "proxy" is a reverse proxy configured according to the Ingress resources in the cluster by an Ingress controller. You need to use an Ingress controller that supports your network setup.

How to access Kubernetes pod in local cluster?

I have set up an experimental local Kubernetes cluster with one master and three slave nodes. I have created a deployment for a custom service that listens on port 10001. The goal is to access an example endpoint /hello with a stable IP/hostname, e.g. http://<master>:10001/hello.
After applying the deployment, the pods are created fine and are accessible through their cluster IPs.
I understand the solution for cloud providers is to create a load balancer service for the deployment, so that you can just expose a service. However, this is apparently not supported for a local cluster. Setting up Ingress seems overkill for this purpose. Is it not?
It seems more like kubectl proxy is the way to go. However, when I run kubectl proxy --port=<port> on the master node, I can access http://<master>:<port>/api/..., but not the actual pod.
There are many related questions (e.g. How to access services through kubernetes cluster ip?), but no (accepted) answers. The Kubernetes documentation on the topic is rather sparse as well, so I am not even sure about what is the right approach conceptually.
I am hence looking for a straightforward solution and/or a good tutorial. It seems to be a very typical use case that nevertheless lacks a clear path.
If an Ingress Controller is overkill for your scenario, you may want to try using a service of type NodePort. You can specify the port, or let the system auto-assign one for you.
A NodePort service exposes your service at the same port on all Nodes in your cluster. If you have network access to your Nodes, you can access your service at the node IP and port specified in the configuration.
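A minimal NodePort sketch for the /hello service described above (the app: hello label is an assumption; note that an explicit nodePort must fall within the cluster's node-port range, 30000-32767 by default, so you cannot pick 10001 without reconfiguring the apiserver):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: hello
spec:
  type: NodePort
  selector:
    app: hello            # assumed pod label
  ports:
  - port: 10001           # service port inside the cluster
    targetPort: 10001     # container port
    nodePort: 30001       # placeholder within the default node-port range
```

After applying this, http://<any-node-ip>:30001/hello should answer, including http://<master>:30001/hello.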
Obviously, this does not load balance between nodes. You can add an external service to help you do this if you want to emulate what a real load balancer would do. One simple option is to run something like rocky-cli.
An Ingress is probably your simplest bet.
You can schedule the creation of an Nginx IngressController quite simply; here's a guide for that. Note that this setup uses a DaemonSet, so there is an IngressController on each node. It also uses the hostPort config option, so the IngressController will listen on the node's IP, instead of a virtual service IP that will not be stable.
Now you just need to get your HTTP traffic to any one of your nodes. You'll probably want to define an external DNS entry for each Service, each pointing to the IPs of your nodes (i.e. multiple A/AAAA records). The ingress will disambiguate and route inside the cluster based on the HTTP hostname, using name-based virtual hosting.
If you need to expose non-HTTP services, this gets a bit more involved, but you can look in the nginx ingress docs for more examples (e.g. UDP).
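As one sketch of the non-HTTP case: the NGINX ingress controller can forward raw TCP streams through its tcp-services ConfigMap (the names, namespace, and ports below are placeholders, and the controller's own Service must also expose the port):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  "5432": "default/postgres:5432"   # <external port>: <namespace>/<service>:<service port>
```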

Can reverse proxy in Service Fabric be used with multiple windows containers?

I'm evaluating Service Fabric and Docker Swarm for container orchestration, and I can see Service Fabric has an edge by being able to use a reverse proxy implementation that runs on all nodes in the cluster. The problem is that, based on the cluster manifest, only one port can be used as the reverse proxy port, so I'm not fully understanding how this can be utilized when multiple Windows containers are running, each on its own port. I need to use port-to-port mapping only (no HTTP rewriting), so ultimately I want a one-to-one reverse port mapping to each individual Windows container.
Is it possible to accomplish by using service fabric?
To be clear: I have www.app1.com and www.app2.com hosted in two different containers that don't need to talk to each other. If I deploy those to Service Fabric, how do I use the reverse proxy with a single published external port to reach those containers externally?
At this point in time (version 5.6 of Service Fabric), Reverse Proxy will do the service resolution using the Service Fabric naming service and provide the URI to get to your service. The URL that reverse proxy will find your service on is specific to Service Fabric - e.g. http://clusterFQDN/appName/serviceName:port.
You can use the DNS Service to get a container IP (the IP of a host node in the cluster running your container). However, you can only find the port by doing a DNS SRV record lookup.
Current best options for exposing containers in a Service Fabric cluster are:
- If you have a fixed host port for your container, the Azure load balancer will be able to monitor where the container lives and forward requests to only those nodes. You can add additional public IPs to your load balancer and use one per container. This cannot be used with dynamic host ports in the cluster.
- Azure API Management can resolve Service Fabric services by integrating with the Service Fabric Naming Service.
- Create your own HTTP gateway as a Reliable Service: https://github.com/weidazhao/Hosting or https://github.com/c3-ls/ServiceFabric-Http
- Run Nginx as a service in the cluster; based on this prototype you can run and configure Nginx in Service Fabric: https://github.com/knom/ServiceFabric-Nginx
Yes, you can use the reverse proxy with multiple containers. The idea is simple:
- Configure port-to-host mapping so your host knows which port your application is listening on.
- Configure container-to-container mapping so your container registers an endpoint with Service Fabric. You can choose the port for this endpoint; it will be registered with the Naming Service and be available to the reverse proxy.
Communication between containers can then be done through the reverse proxy using the service name and the port you specified. If you didn't specify a port number, Service Fabric will assign one for you, and you can read it from an environment variable.
The Service Fabric team has excellent documentation about this here:
https://learn.microsoft.com/en-us/azure/service-fabric/service-fabric-deploy-container-linux

Hitting an endpoint of HeadlessService - Kubernetes

We wanted pod names to be resolved to IPs to configure the seed nodes in an Akka cluster. This was happening by using the concept of a headless service and stateful sets in Kubernetes. But how do I expose a headless service externally, so an endpoint can be hit from outside?
It is hard to expose a headless Kubernetes service to the outside, since that would require some complex TCP proxying. The reason is that a headless service is only a DNS record with an IP for each pod, and these IPs are only reachable from within the cluster.
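For context, what makes a service headless is clusterIP: None; a minimal sketch (the label and port are assumptions for an Akka-style setup):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: akka-seed
spec:
  clusterIP: None          # headless: DNS returns the pod IPs directly
  selector:
    app: akka-node         # assumed pod label
  ports:
  - port: 2551             # assumed Akka remoting port
```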
One solution is to expose this via node ports, which means the ports are opened on the host itself. Unfortunately, this makes service discovery harder, because you don't know which host has a pod scheduled on it.
You can set up node ports via:
- the Service: https://kubernetes.io/docs/user-guide/services/#type-nodeport
- or directly in the pod by defining spec.containers[].ports[].hostPort, as sketched below
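A minimal hostPort sketch (image, label, and ports are placeholders); the container port is published on the IP of whichever node the pod is scheduled to:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: akka-node
  labels:
    app: akka-node
spec:
  containers:
  - name: akka
    image: example/akka-app:latest   # placeholder image
    ports:
    - containerPort: 2551
      hostPort: 2551                 # published on the node's IP
```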
Another alternative is to use a LoadBalancer, if your cloud provider supports that. Unfortunately, you cannot address each instance individually, since they all share the same IP. This might not be suitable for your application.