Charmed Distribution Of Kubernetes on LXC and Ingress - kubernetes

I am trying to get a decent solution of exposing my services from a Kubernetes cluster hosted on local LXC containers.
The setup is as follows:
Host: Ubuntu 18.04 running an LXC cluster.
Inside LXC there is a Charmed Distribution of Kubernetes running my apps, plus another container running an NGINX reverse proxy.
I've also set up a MetalLB load balancer inside Kubernetes, and every k8s service that needs to be exposed to the internet is of type LoadBalancer:
apiVersion: v1
kind: Service
metadata:
  namespace: blazedesk
  name: blazedesk-sdeweb-server
  labels:
    app: blazedesk
spec:
  ports:
  - port: 80
    targetPort: 80
    name: "http"
  - port: 443
    targetPort: 443
    name: "https"
  selector:
    app: blazedesk
    tier: sdeweb-server
  type: LoadBalancer
What I have done so far is redirect all HTTP and HTTPS traffic arriving at the main host to the NGINX reverse proxy:
lxc config device add proxy myport80 proxy listen=tcp:0.0.0.0:80 connect=tcp:127.0.0.1:80 proxy_protocol=true
lxc config device add proxy myport443 proxy listen=tcp:0.0.0.0:443 connect=tcp:127.0.0.1:443 proxy_protocol=true
NGINX is then configured to proxy traffic matching DNS names to the k8s services' external IPs:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/blazedesk-sdeweb-server LoadBalancer 10.152.183.215 10.190.26.240 80:31476/TCP,443:31055/TCP 17d
proxy_pass https://10.190.26.240;
As you can imagine, this setup involves a lot of manual work, especially when k8s services are recreated and MetalLB allocates new IPs.
Is there a simpler way to redirect the traffic from the host directly to a Kubernetes ingress, somehow bypassing the LXC layer?

I actually made it work with an NGINX ingress controller exposed as a LoadBalancer service, redirecting HTTP and HTTPS traffic from the host, using iptables, to the ingress external IP; a rough sketch of the rules is below.
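The exact rules depend on your setup; a minimal sketch, assuming the ingress controller's external IP is 10.190.26.240 (an illustrative address taken from the kubectl output above, substitute your own), might look like:
# DNAT HTTP/HTTPS arriving on the host to the ingress controller's external IP
iptables -t nat -A PREROUTING -p tcp --dport 80 -j DNAT --to-destination 10.190.26.240:80
iptables -t nat -A PREROUTING -p tcp --dport 443 -j DNAT --to-destination 10.190.26.240:443
# masquerade so return traffic flows back through the host (requires net.ipv4.ip_forward=1)
iptables -t nat -A POSTROUTING -p tcp -d 10.190.26.240 --dport 80 -j MASQUERADE
iptables -t nat -A POSTROUTING -p tcp -d 10.190.26.240 --dport 443 -j MASQUERADE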

Related

Minikube: access private services using proxy/vpn

I've installed minikube to learn kubernetes a bit better.
I've deployed some apps and services which have IPs in the 10.x.x.x range (private IPs). I can expose my services on minikube and visit them in my browser, but I want to use the private IPs without exposing them.
How can I reach (VPN/proxy-wise) the private IPs of services in minikube?
Minikube is Kubernetes with only one node, with the control plane running on that same node.
It gives you a way to learn how Kubernetes works with minimal hardware, and it is ideal for testing purposes and for running seamlessly on a laptop. Minikube still uses the mature network stack from Kubernetes, which means ports are exposed through services and services communicate with pods.
To understand what is communicating with what, let me explain what ClusterIP does: it exposes the service on an internal IP in the cluster. This type makes the service reachable only from within the cluster.
You can get the cluster IP with the command:
kubectl get services test_service
So, after you create a new service, you want to establish connections to its ClusterIP.
Basically, there are three ways to connect to a backend resource:
1/ Use kube-proxy - this proxy reflects services as defined in the Kubernetes API and can do simple TCP and UDP stream forwarding to a backend, or to a set of backends in more advanced configurations. Service cluster IPs and ports are currently found through Docker-compatible environment variables specifying the ports opened by the service proxy. There is an optional addon that provides cluster DNS for these cluster IPs. The user must create a service with the apiserver API to configure the proxy.
The example shows how we can use a selector to define a connection to port 50000 on the ClusterIP - config.yaml may consist of:
kind: Service
apiVersion: v1
metadata:
  name: jenkins-discovery
  namespace: ci
spec:
  type: ClusterIP
  selector:
    app: master
  ports:
  - protocol: TCP
    port: 50000
    targetPort: 50000
    name: slaves
2/ Use port forwarding to access the application - first check that the kubectl command-line tool can communicate with your minikube cluster, and if it does, find the service port from the ClusterIP configuration.
kubectl get svc | grep test_service
Let's assume the service test_service works on port 5555, so to do port forwarding run the command:
kubectl port-forward pods/test_service 5555:5555
After that, your service will be available on localhost:5555.
3/ If you are familiar with the concept of pod networking, you can declare public ports in the pod's manifest file. A user can connect to the pod network by defining a manifest:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 8080
When the container starts with a manifest file like the one above, the pod declares TCP port 8080 (adding a hostPort entry alongside containerPort is what forwards the corresponding host port to the pod).
Please keep in mind that ClusterIP is used by many of the services the cluster needs in order to work properly. I think it is not good practice to treat ClusterIPs like regular network services - in the worst case, you can quickly break a cluster by putting its internal network connections into an invalid state.

Docker for Mac(Edge) - Kubernetes - LoadBalancer

It is so cool that we have a LoadBalancer in Docker for Mac.
I have a question regarding ports created:
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    run: nginx
spec:
  ports:
  - port: 9999
    targetPort: 80
  selector:
    run: nginx
  type: LoadBalancer
This gives me (kubectl get service):
nginx LoadBalancer 10.96.128.253 localhost 9999:32455/TCP 2s
What is 32455?
Thanks
32455 is your nodePort. Kubernetes automatically assigns a unique nodePort for any service that is accessible outside of the cluster (including services of type LoadBalancer). You can also specify it yourself in the same config, as long as it falls within the allowed node port range (30000-32767 by default); a sketch is shown below.
With regards to Docker for Mac specifically, Kubernetes is creating a service which is listening on localhost:9999. This is an "egress" that Kubernetes is creating since you don't actually have a load balancer; it's essentially simulating one. Beyond the "load balancer/egress", it still behaves just like it would in production - that is, Kubernetes assigns a nodePort for the service. If you curl localhost:32455, you will likely get the same response as if you had curled localhost:9999.
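A minimal sketch of pinning the nodePort yourself (hypothetical values; the chosen port must be in the node port range, 30000-32767 by default):
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    run: nginx
spec:
  type: LoadBalancer
  selector:
    run: nginx
  ports:
  - port: 9999
    targetPort: 80
    nodePort: 32455   # explicitly chosen instead of auto-assigned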

How to access kubernetes service externally on bare metal install

I'm trying to build a bare metal k8s cluster to provide some services, and I need to be able to serve them on TCP port 80 and UDP port 69 (accessible from outside the k8s cluster). I've set the cluster up using kubeadm and it's running Ubuntu 16.04. How do I access the services externally? I've been trying to use load balancers and ingress, but am having no luck since I'm not using an external load balancer (local rather than AWS etc.).
An example of what I'm trying to do can be found here but it's using GCE.
Thanks
Service with NodePort
Create a service with type NodePort; the service can listen on a TCP/UDP port in the range 30000-32767 on every node. By default, you cannot simply choose to expose a Service on port 80 on your nodes.
kind: Service
apiVersion: v1
metadata:
  name: my-service
spec:
  selector:
    app: MyApp
  ports:
  - protocol: TCP
    port: {SERVICE_PORT}
    targetPort: {POD_PORT}
    nodePort: 31000
  - protocol: UDP
    port: {SERVICE_PORT}
    targetPort: {POD_PORT}
    nodePort: 32000
  type: NodePort
The container image gcr.io/google_containers/proxy-to-service:v2 is a very small container that will do port-forwarding for you. You can use it to forward a pod port or a host port to a service. Pods can choose any port or host port, and are not limited in the same way Services are.
apiVersion: v1
kind: Pod
metadata:
  name: dns-proxy
spec:
  containers:
  - name: proxy-udp
    image: gcr.io/google_containers/proxy-to-service:v2
    args: [ "udp", "53", "kube-dns.default", "1" ]
    ports:
    - name: udp
      protocol: UDP
      containerPort: 53
      hostPort: 53
  - name: proxy-tcp
    image: gcr.io/google_containers/proxy-to-service:v2
    args: [ "tcp", "53", "kube-dns.default" ]
    ports:
    - name: tcp
      protocol: TCP
      containerPort: 53
      hostPort: 53
Ingress
If there are multiple services sharing the same TCP port with different hosts/paths, deploy the NGINX Ingress Controller, which listens on HTTP 80 and HTTPS 443.
Create an Ingress to forward the traffic to the specified services.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-ingress
  annotations:
    ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /testpath
        backend:
          serviceName: test
          servicePort: 80
If I was going to do this on my home network I'd do it like this:
Configure 2 port forwarding rules on my router, to redirect traffic to an nginx box acting as an L4 load balancer.
So if my router IP was 1.2.3.4 and my custom L4 nginx LB was 192.168.1.200
Then I'd tell my router to port forward:
1.2.3.4:80 --> 192.168.1.200:80
1.2.3.4:443 --> 192.168.1.200:443
I'd follow this https://kubernetes.github.io/ingress-nginx/deploy/
and deploy most of what's in the generic cloud ingress controller manifest. That should create an ingress controller pod plus an L7 nginx LB deployment and service in the cluster, and expose it on NodePorts, so you'd have NodePort 32080 and 32443 (note that they would actually be random, but this is easier to follow). Since you're working on bare metal, I don't believe it would be able to auto-spawn and configure the L4 load balancer for you.
I'd then manually configure the L4 load balancer to balance traffic coming in on:
port 80 ---> NodePort 32080
port 443 ---> NodePort 32443
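A minimal sketch of that standalone L4 nginx config, assuming two worker nodes at 192.168.1.201 and 192.168.1.202 and the example NodePorts above (all addresses and ports are illustrative):
# /etc/nginx/nginx.conf on the L4 load balancer box (the stream block lives outside http {})
stream {
    upstream ingress_http {
        server 192.168.1.201:32080;   # node1: HTTP NodePort
        server 192.168.1.202:32080;   # node2: HTTP NodePort
    }
    upstream ingress_https {
        server 192.168.1.201:32443;   # node1: HTTPS NodePort
        server 192.168.1.202:32443;   # node2: HTTPS NodePort
    }
    server {
        listen 80;
        proxy_pass ingress_http;
    }
    server {
        listen 443;
        proxy_pass ingress_https;
    }
}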
So between that big picture of what to do and the following link, you should be good: https://kubernetes.github.io/ingress-nginx/deploy/baremetal/
(Btw this will let you continue to configure your ingress with the ingress controller)
Note: I plan to setup a bare metal cluster in my home closet in a few months so let me know how it goes!
If you have just one node, deploy the ingress controller as a DaemonSet with host port 80, and do not deploy a service for it; a sketch is below.
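A minimal sketch of such a DaemonSet, with the namespace, image tag, controller arguments, and RBAC details simplified or omitted (an illustrative outline, not an exact manifest):
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
spec:
  selector:
    matchLabels:
      app: ingress-nginx
  template:
    metadata:
      labels:
        app: ingress-nginx
    spec:
      # serviceAccountName and controller args omitted for brevity
      containers:
      - name: nginx-ingress-controller
        image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.24.1   # illustrative tag
        ports:
        - containerPort: 80
          hostPort: 80      # binds the node's port 80 directly to the controller
        - containerPort: 443
          hostPort: 443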
If you have multiple nodes: with cloud providers, a load balancer is a construct outside the cluster that is basically an HA proxy to each node running pods of your service on some port(s). You can do this kind of thing manually: for any service you want to expose, set the type to NodePort with some port in the allowed range (somewhere in the 30000s) and spin up another VM with a TCP balancer (such as nginx) pointing at all your nodes on that port. You'll be limited to running as many pods as you have nodes for that service.

Whitelist/Filter incoming ips for https load balancer

I use Google Container Engine with Kubernetes.
I have created an https load balancer which terminates SSL and forwards traffic to the k8s cluster nodes. The problem is that I see no option to whitelist/filter incoming IP addresses. Is there one?
It sounds like you've set up a load balancer outside of Kubernetes. You may want to consider using a Kubernetes Service set to type: LoadBalancer. That type of service will give you an external IP that load balances to all of your Pods and can be easily restricted to whitelist IPs using the loadBalancerSourceRanges setting. Here is the example from the docs at https://kubernetes.io/docs/tasks/access-application-cluster/configure-cloud-provider-firewall/
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  ports:
  - port: 8765
    targetPort: 9376
  selector:
    app: example
  type: LoadBalancer
  loadBalancerIP: 79.78.77.76
  loadBalancerSourceRanges:
  - 130.211.204.1/32
  - 130.211.204.2/32
If you're using the GCE controller this is not yet possible [1]; only the nginx controller [2] accepts an IP whitelist, via the whitelist-source-range annotation (sketch below).
[1] https://github.com/kubernetes/ingress/issues/566
[2] https://github.com/kubernetes/ingress/blob/188c64aaac17ef29400e0f143b9aed7770e32fee/controllers/nginx/configuration.md#whitelist-source-range
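A minimal sketch of using that annotation on an Ingress served by the nginx controller (host, service name, and CIDRs are illustrative; the annotation prefix may differ between controller versions):
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: whitelist-example
  annotations:
    ingress.kubernetes.io/whitelist-source-range: "130.211.204.1/32,130.211.204.2/32"
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: myapp
          servicePort: 8765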

Assign an External IP to a Node

I'm running a bare metal Kubernetes cluster and trying to use a Load Balancer to expose my services. I know that typically the Load Balancer is a function of the underlying public cloud, but with recent support for Ingress Controllers it seems like it should now be possible to use nginx as a self-hosted load balancer.
So far, I've been following the example here to set up an nginx Ingress Controller and some test services behind it. However, I am unable to follow Step 6, which displays the external IP for the node that the load balancer is running on, as my node does not have an ExternalIP in the addresses section, only a LegacyHostIP and InternalIP.
I've tried manually assigning an ExternalIP to my cluster by specifying it in the service's specification. However, this appears to be mapped as the externalID instead.
How can I manually set my node's ExternalIP address?
This is something that is tested and works for an nginx service created on a particular node.
apiVersion: v1
kind: Service
metadata:
  name: nginx
  namespace: default
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
    name: http
  - port: 443
    protocol: TCP
    targetPort: 443
    name: https
  externalIPs:
  - '{{external_ip}}'
  selector:
    app: nginx
This assumes an upstream nginx deployment listening on ports 80 and 443.
The externalIP is the public IP of the node.
I would suggest checking out MetalLB: https://github.com/google/metallb
It allows for externalIP addresses in a bare metal cluster using either ARP or BGP. It has worked great for us and lets you simply request a LoadBalancer service like you would in the cloud; a minimal configuration sketch is below.
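A minimal sketch of a MetalLB layer 2 (ARP) configuration, assuming MetalLB is already installed and that 192.168.1.240-192.168.1.250 is an unused range on your LAN (both assumptions to adapt); this uses the ConfigMap format MetalLB used at the time, and newer releases configure pools differently:
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.1.240-192.168.1.250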