access postgres in kubernetes from an application outside the cluster - postgresql

I am trying to access a Postgres DB deployed in Kubernetes (kubeadm) on CentOS VMs from another application running on a different CentOS VM. I have deployed the Postgres service as type 'NodePort'. My understanding is that the LoadBalancer type only works on cloud providers like AWS/Azure and not on bare-metal VMs. So now I am trying to configure an 'ingress' with the NodePort-type service. But I am still unable to access my DB other than by using kubectl exec $Pod-Name on the Kubernetes master.
My ingress.yaml is:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: postgres-ingress
spec:
  backend:
    serviceName: postgres
    servicePort: 5432
which does not show any address, as seen below:
NAME               HOSTS   ADDRESS   PORTS   AGE
postgres-ingress   *                 80      4m19s
I am not even able to access it from pgAdmin on my local Mac. Am I missing something?
Any help is highly appreciated.

Ingress won't work; it is designed only for HTTP traffic, and the Postgres protocol is not HTTP. You want solutions that deal with raw TCP traffic:
A NodePort service alone should be enough. It's probably the simplest solution. Find out the port by doing kubectl describe on the service, and then connect your Postgres client to the IP of the node VM (not the pod or service) on that port (see the example manifest after this list).
You can use port-forwarding: kubectl port-forward pod/your-postgres-pod 5432:5432, and then connect your Postgres client to localhost:5432. This is my preferred way for accessing the database from your local machine (it's very handy and secure) but I wouldn't use it for production workloads (kubectl must be always running so it's somewhat fragile and you don't get the best performance).
If you do special networking configuration, it is possible to directly access the service or pod IPs from outside the cluster. You have to route traffic for the pod and service CIDR ranges to the k8s nodes; this will probably involve configuring your VM hypervisors, routers and firewalls, and is highly dependent on which networking (CNI) plugin you are using for your Kubernetes cluster.
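Going back to the first option, here is a rough sketch of what a NodePort Service for the database could look like. The app: postgres selector label and the fixed nodePort value 30432 are assumptions; you can also omit nodePort and let Kubernetes pick one from the 30000-32767 range:
apiVersion: v1
kind: Service
metadata:
  name: postgres
spec:
  type: NodePort
  selector:
    app: postgres        # assumed label on the Postgres pod
  ports:
  - protocol: TCP
    port: 5432           # service port inside the cluster
    targetPort: 5432     # container port Postgres listens on
    nodePort: 30432      # assumed fixed node port (must be in 30000-32767)
You would then point pgAdmin at <node-VM-IP>:30432, or at whatever port kubectl describe service postgres reports.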

Related

Clean way to connect to services running on the same host as the Kubernetes cluster

I have a single node Kubernetes cluster, installed using k3s on bare metal. I also run some services on the host itself, outside the Kubernetes cluster. Currently I use the external IP address of the machine (192.168.200.4) to connect to these services from inside the Kubernetes network.
Is there a cleaner way of doing this? What I want to avoid is having to reconfigure my Kubernetes pods if I decide to change the IP address of my host.
Possible magic I wish existed: a Kubernetes service or IP that automagically points to my external IP (192.168.200.4), or a DNS name that points to the node's external IP address.
That's what ExternalName services are for (https://kubernetes.io/docs/concepts/services-networking/service/#externalname):
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: ExternalName
  externalName: ${my-hostname}
  ports:
  - port: 80
Then you can access the service from within Kubernetes as my-service.${namespace}.svc.cluster.local.
See: https://livebook.manning.com/concept/kubernetes/external-service
After the service is created, pods can connect to the external service through the external-service.default.svc.cluster.local domain name (or even external-service) instead of using the service’s actual FQDN. This hides the actual service name and its location from pods consuming the service, allowing you to modify the service definition and point it to a different service any time later, by only changing the externalName attribute or by changing the type back to ClusterIP and creating an Endpoints object for the service—either manually or by specifying a label selector on the service and having it created automatically.
ExternalName services are implemented solely at the DNS level—a simple CNAME DNS record is created for the service. Therefore, clients connecting to the service will connect to the external service directly, bypassing the service proxy completely. For this reason, these types of services don’t even get a cluster IP.
This relies on using a resolvable hostname of your machine. On minikube there's a DNS alias host.minikube.internal that is set up to resolve to an IP address that routes to your host machine; I don't know if k3s supports something similar.
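If you cannot rely on a resolvable hostname at all, the ClusterIP-plus-Endpoints variant mentioned in the quote avoids DNS entirely. A rough sketch, where the name external-host is made up and 192.168.200.4 is the host IP from the question:
apiVersion: v1
kind: Service
metadata:
  name: external-host      # hypothetical name; no selector, so no endpoints are managed automatically
spec:
  ports:
  - port: 80
---
apiVersion: v1
kind: Endpoints
metadata:
  name: external-host      # must match the Service name
subsets:
- addresses:
  - ip: 192.168.200.4      # the host IP from the question
  ports:
  - port: 80
Pods can then reach the host at external-host.<namespace>.svc.cluster.local:80, and only the Endpoints object needs updating if the host IP ever changes.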
Thanks #GeertPt,
With minikube's host.minikube.internal in mind I searched around and found that CoreDNS has a DNS entry for each host it's running on. This only seems to be the case for K3s.
Checking
kubectl -n kube-system get configmap coredns -o yaml
reveals there is the following entry:
NodeHosts: |
  192.168.200.4 my-hostname
So if the hostname doesn't change, I can use this instead of the IP.
Also, if you're running plain docker you can use host.docker.internal to access the host.
So to sum up:
from minikube: host.minikube.internal
from docker: host.docker.internal
from k3s: <hostname>

Ignite CommunicationSpi questions in PaaS environment

My environment is that the Ignite client is on Kubernetes and the Ignite server is running on a normal server.
In such an environment, TCP connections are not allowed from the server to the client.
For this reason, CommunicationSpi connections (server -> client) cannot be established.
What I'm curious about is: what issues can occur in situations where CommunicationSpi is not available?
And in this environment, is there a way to make a CommunicationSpi (server -> client) connection?
In Kubernetes, the service is used to communicate with pods.
The default service type in Kubernetes is ClusterIP
ClusterIP is an internal IP address reachable from inside of the Kubernetes cluster only. The ClusterIP enables the applications running within the pods to access the service.
To expose the pods outside the kubernetes cluster, you will need k8s service of NodePort or LoadBalancer type.
NodePort: Exposes the Service on each Node’s IP at a static port (the NodePort). A ClusterIP Service, to which the NodePort Service routes, is automatically created. You’ll be able to contact the NodePort Service, from outside the cluster, by requesting <NodeIP>:<NodePort> .
Please note that it is needed to have an external IP address assigned to one of the nodes in the cluster and a firewall rule that allows ingress traffic to that port. As a result, kube-proxy on the Kubernetes node (the one the external IP address is attached to) will proxy that port to the pods selected by the service.
LoadBalancer: Exposes the Service externally using a cloud provider’s load balancer. NodePort and ClusterIP Services, to which the external load balancer routes, are automatically created.
Alternatively it is possible to use Ingress
There is a very good article on accessing Kubernetes Pods from outside of the cluster.
Hope that helps.
Edited on 09-Dec-2019
Upon your comment I recall that it's possible to use the hostNetwork and hostPort methods.
hostNetwork
The hostNetwork setting applies to the Kubernetes pods. When a pod is configured with hostNetwork: true, the applications running in such a pod can directly see the network interfaces of the host machine where the pod was started. An application that is configured to listen on all network interfaces will in turn be accessible on all network interfaces of the host machine.
Example:
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  hostNetwork: true
  containers:
  - name: nginx
    image: nginx
You can check that the application is running with: curl -v http://kubenode01.example.com
Note that every time the pod is restarted, Kubernetes can reschedule the pod onto a different node, so the application will change its IP address. Besides that, two applications requiring the same port cannot run on the same node. This can lead to port conflicts when the number of applications running on the cluster grows.
What is the host networking good for? For cases where a direct access to the host networking is required.
hostPort
The hostPort setting applies to the Kubernetes containers. The container port will be exposed to the external network at <hostIP>:<hostPort>, where the hostIP is the IP address of the Kubernetes node where the container is running and the hostPort is the port requested by the user.
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 8086
      hostPort: 443
The hostPort feature allows you to expose a single container port on the host IP. Using hostPort to expose an application to the outside of the Kubernetes cluster has the same drawbacks as the hostNetwork approach discussed in the previous section: the host IP can change when the container is restarted, and two containers using the same hostPort cannot be scheduled on the same node.
What is the hostPort used for? For example, the nginx based Ingress controller is deployed as a set of containers running on top of Kubernetes. These containers are configured to use hostPorts 80 and 443 to allow the inbound traffic on these ports from the outside of the Kubernetes cluster.
To support such a deployment configuration you would need to dance a lot around the network configuration (setting up K8s Services, an Ignite AddressResolver, etc.). The Ignite community is already aware of this inconvenience and is working on an out-of-the-box solution.
Updated
If you run Ignite thick clients in a K8s environment and the servers are on VMs, then you need to enable the TcpCommunicationSpi.forceClientToServerConnections mode to avoid connectivity issues.
If you run Ignite thin clients, then just provide the IPs of the servers in the client configuration, as described here.

Kubernetes nginx ingress accessible outside of the cluster without using a service

Apologies if this has been answered before, but I am a little confused about how Ingress Nginx works together with services.
I am trying to implement an nginx ingress in my Kubernetes environment.
So far I have an ingress-nginx-controller-deployment setup, as well as a deployment and service for the default backend. I still need to create my actual Ingress resources, the ingress-nginx-controller-service and also my backend.
curl <NodeIP>
returns "default backend 404" on port 80 for the Node which the ingress-nginx-controller-deployment is deployed on.
However, my understanding is that exposing anything out of the cluster requires a service (Nodeport/Loadbalancer), which is the duty of the ingress-nginx-controller-service.
My question is how is this possible, that I can access port 80 for my Node on my browser, which is outside the cluster?
Could I then deploy my backend app on port 80 the same way the above is done?
I feel like I am misunderstanding a key concept here.
default backend image: gcr.io/google_containers/defaultbackend:1.0
nginx-controller image: gcr.io/google_containers/nginx-ingress-controller:0.9.0-beta.3
I think you missed a really good article about how nginx-ingress is exposed to the world!
In short:
If you're using hostNetwork: true, then you bypass the Kubernetes networking layer (kube-proxy). In simple words, you bypass the container and orchestration network and use the host network directly, so the node running the nginx-ingress container will expose port 80 to the world.
There are other ways to expose the nginx port outside of the cluster (NodePort, a network load balancer such as MetalLB).

How can I write a minimal NetworkPolicy to firewall a Kubernetes application with a Service of type LoadBalancer using Calico?

I have a Kubernetes cluster running Calico as the overlay and NetworkPolicy implementation configured for IP-in-IP encapsulation and I am trying to expose a simple nginx application using the following Service:
apiVersion: v1
kind: Service
metadata:
  name: nginx
  namespace: default
spec:
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 80
  selector:
    app: nginx
I am trying to write a NetworkPolicy that only allows connections via the load balancer. On a cluster without an overlay, this can be achieved by allowing connections from the CIDR used to allocate IPs to the worker instances themselves - this allows a connection to hit the Service's NodePort on a particular worker and be forwarded to one of the containers behind the Service via IPTables rules. However, when using Calico configured for IP-in-IP, connections made via the NodePort use Calico's IP-in-IP tunnel IP address as the source address for cross node communication, as shown by the ipv4IPIPTunnelAddr field on the Calico Node object here (I deduced this by observing the source IP of connections to the nginx application made via the load balancer). Therefore, my NetworkPolicy needs to allow such connections.
My question is how I can allow these types of connections without knowing the ipv4IPIPTunnelAddr values beforehand and without allowing connections from all Pods in the cluster (since the ipv4IPIPTunnelAddr values are drawn from the cluster's Pod CIDR range). If worker instances come up and die, the list of such IPs will surely change and I don't want my NetworkPolicy rules to depend on them.
Calico version: 3.1.1
Kubernetes version: 1.9.7
Etcd version: 3.2.17
Cloud provider: AWS
I’m afraid we don’t have a simple way to match the tunnel IPs dynamically right now. If possible, the best solution would be to move away from IPIP; once you remove that overlay, everything gets a lot simpler.
In case you’re wondering, we need to force the nodes to use the tunnel IP because, if you’re suing IPIP, we assume that your network doesn’t allow direct pod-to-node return traffic (since the network won’t be expecting the pod IP it may drop the packets)

Access an application running on kubernetes from the internet

I'm pretty sure that this is a basic use case when running apps on Kubernetes, but until now I wasn't able to find a tutorial, nor understand from the documentation, how to make it work.
I have an application which is listening on port 9000. So when run on my localhost, I can access it through a web browser at localhost:9000. When run in a Docker container on my VPS, it's also accessible at myVPSAddress:9000. Now the question is how to deploy it on Kubernetes running on the very same Virtual Private Server and expose the application so that it is visible just as when deployed on Docker. I can access the application from within the VPS on the address of the cluster, but not on the IP address of the server itself. Can somebody show me some basic dockerfile with a description of what it is doing, or show me some idiot-proof way how to make it work? Thanks
While one would think that this is a very basic use case, that is not the case for people running their own Kubernetes clusters on bare-metal servers (the way you are on your VPS).
The recommended way of exposing an application to "the world" is to use Kubernetes services; see this piece of documentation about exposing services. You define a Kubernetes service, either of type NodePort or of type LoadBalancer *.
Here is what a dead simple service of type NodePort looks like (note that the default type is actually ClusterIP, so NodePort has to be set explicitly):
kind: Service
apiVersion: v1
metadata:
  name: my-service
spec:
  type: NodePort
  selector:
    app: MyApp
  ports:
  - protocol: TCP
    port: 9000
    targetPort: 9376
This will expose the pods labelled app: MyApp as a service named my-service: inside the cluster it is reachable on port 9000 and forwards to the containers' port 9376 (for the app in the question, which listens on 9000, targetPort would be 9000 as well), and as a NodePort service it is also exposed on every node in your VPS cluster on a port in the 30000-32767 range.
Assuming your nodes have a public IP (which from your question I assume they do), you can safely do curl <NodeIP>:<nodePort>, using the node port shown by kubectl get service my-service.
Because exposing a random high port like this is usually not an ideal experience for users, people use services of type LoadBalancer. This service type provides a unique IP for each of your services instead of a port.
These services are first-class citizens on cloud-managed clusters, such as Google's GKE, but if you run your own Kubernetes cluster (set up using, say, kubeadm), then you need to deploy your own load balancer provider. I've used the excellent MetalLB and it works flawlessly once it's been set up, but you need to set it up yourself. If you want DNS names for your services as well, you should also look at ExternalDNS.
* The caveat here is that you can also use a Service with externalIPs set if you can somehow make that IP routable, but unless the network is in your control, this is usually not a feasible approach, and I'd recommend looking at an LB provider instead.
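To make the LoadBalancer option concrete: with a provider such as MetalLB installed, the same application could be exposed by switching the Service above to type LoadBalancer. This is only a sketch; it reuses the app: MyApp selector from the example above and assumes the container itself listens on 9000:
kind: Service
apiVersion: v1
metadata:
  name: my-service
spec:
  type: LoadBalancer
  selector:
    app: MyApp            # same selector as the NodePort example above
  ports:
  - protocol: TCP
    port: 9000            # port exposed on the load balancer IP
    targetPort: 9000      # assumed container port of the application
MetalLB would then assign the Service an external IP from its configured address pool, and clients would reach the application at that IP on port 9000.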