How to expose ingress for Consul - Kubernetes

I'm trying to add a Consul ingress to my project, and I'm using this GitHub repo as a doc for the UI and ingress: here. As you can see, unfortunately there is no Ingress in the doc; there is an ingressGateways option, which is not useful because it doesn't create an Ingress inside Kubernetes (it can only expose a URL to the outside).
I have searched a lot, and there are two possible options:
1: create an extra deployment for the ingress
2: customize the Consul Helm chart to add an ingress deployment
(unfortunately I couldn't find a proper solution for this on the Internet)

Here is an example Docker compose file which configures Traefik to expose an entrypoint named web which listens on TCP port 8000, and integrates Traefik with Consul's service catalog for endpoint discovery.
# docker-compose.yaml
---
version: "3.8"
services:
  consul:
    image: consul:1.8.4
    ports:
      - "8500:8500/tcp"
  traefik:
    image: traefik:v2.3.1
    ports:
      - "8000:8000/tcp"
    environment:
      TRAEFIK_PROVIDERS_CONSULCATALOG_CACHE: 'true'
      TRAEFIK_PROVIDERS_CONSULCATALOG_STALE: 'true'
      TRAEFIK_PROVIDERS_CONSULCATALOG_ENDPOINT_ADDRESS: http://consul:8500
      TRAEFIK_PROVIDERS_CONSULCATALOG_EXPOSEDBYDEFAULT: 'false'
      TRAEFIK_ENTRYPOINTS_web: 'true'
      TRAEFIK_ENTRYPOINTS_web_ADDRESS: ":8000"
Below is a Consul service registration file which registers an application named web that is listening on port 80. The service registration includes a couple of tags which instruct Traefik to expose traffic to the service (traefik.enable=true) over the entrypoint named web, and create the associated routing config for the service.
service {
  name = "web"
  port = 80
  tags = [
    "traefik.enable=true",
    "traefik.http.routers.web.entrypoints=web",
    "traefik.http.routers.web.rule=Host(`example.com`) && PathPrefix(`/myapp`)"
  ]
}
This can be registered into Consul using the CLI (consul service register web.hcl). Traefik will then discover this via the catalog integration, and configure itself based on the routing config specified in the tags.
HTTP requests received by Traefik on port 8000 with a Host header of example.com and a path of /myapp will be routed to the web service that was registered with Consul.
Example curl command.
curl --header "Host: example.com" http://127.0.0.1:8000/myapp
This is a relatively basic example that is suitable for dev/test. You will need to define additional Traefik config parameters if you are deploying into a production Consul environment which is typically secured by access control lists (ACLs).
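For example, if the Consul agent is ACL-secured, Traefik's Consul catalog provider can be given a token. Below is a minimal sketch in the same environment-variable style as the Compose file above, added to the traefik service; the token value is a placeholder you would supply yourself:
    environment:
      TRAEFIK_PROVIDERS_CONSULCATALOG_ENDPOINT_TOKEN: "<consul-acl-token>"   # token with read access to the service catalog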

The ingressGateways config in the Helm chart is for deploying a Consul ingress gateway (powered by Envoy) for Consul service mesh. This is different from a Kubernetes Ingress.
Consul's ingress gateway enables routing to applications running inside the service mesh, and is configured using an ingress-gateway configuration entry (or, in the future, using Consul CRDs). It cannot route to endpoints that exist outside the service mesh, such as Consul's API/UI endpoints.
If you need a generic ingress that can route to applications outside the mesh, I recommend using a solution such as Ambassador, Traefik, or Gloo. All three of these also support integrations with Consul for service discovery or service mesh.
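For example, here is a minimal sketch of a plain Kubernetes Ingress (using ingress-nginx) that exposes the Consul UI service created by the Helm chart. The hostname and the UI service name/port are assumptions (with a release named consul the chart typically creates a consul-consul-ui service on port 80); check kubectl get svc in the Consul namespace and adjust:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: consul-ui
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
    - host: consul.example.com              # assumption: a DNS name you control
      http:
        paths:
          - path: /
            backend:
              serviceName: consul-consul-ui # assumption: Helm release named "consul"
              servicePort: 80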

Related

Kubernetes with cloud providers - How to route SSH traffic to services with LoadBalancers

I'm trying to build a Kubernetes cluster to allow multi-website testing, with multiple database engines, multiple PHP versions, multiple dependencies, multiple front-end stacks, ...
So, my goal is to build something similar to this: [infrastructure schema]
When using ingress-nginx, my cloud provider gives me a LoadBalancer IP.
I was able to deploy ingress-nginx to route my HTTP/HTTPS traffic to the right service using ingress host rules.
Now, I want to be able to connect via SSH to the project1_ssh service with the load balancer IP on port 2022, and to the project2_ssh service with the same load balancer IP on port 2023.
Can I achieve that?
I'm not sure ingress-nginx will allow me to do that.
I was successfully able to connect to my SSH service by declaring this kind of Service:
kind: Service
apiVersion: v1
metadata:
  name: ssh-service
spec:
  selector:
    app: project1
  ports:
    - port: 2300
      targetPort: 23
  type: LoadBalancer
But doing this creates a new LoadBalancer IP, and a new bill from the cloud provider.
I want to have only one LoadBalancer service.
Any suggestions?
The idea is to run ~50 websites, each one in a pod.
OK, I finally removed ingress-nginx, switched to Traefik v2, and achieved what I wanted.
Now I will try to figure out if SNI can make it even simpler (one shared port for all my SSH services, where the host named in my SSH request would be used to route the connection to the right service inside the cluster).
https://kupczynski.info/2019/05/21/traefik-sni.html
Will let you know if it finally works.
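For anyone finding this later, here is a rough sketch of what the Traefik v2 approach can look like using its IngressRouteTCP CRD: one TCP entrypoint per SSH port, each routed to the corresponding project's Service. The entrypoint and service names are assumptions for illustration, not taken from the question:
# Assumes Traefik v2 is installed with its CRDs and started with extra entrypoints, e.g.
# --entrypoints.ssh-p1.address=:2022 --entrypoints.ssh-p2.address=:2023
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRouteTCP
metadata:
  name: project1-ssh
spec:
  entryPoints:
    - ssh-p1                    # hypothetical entrypoint bound to port 2022
  routes:
    - match: "HostSNI(`*`)"     # plain SSH carries no SNI, so match everything on this port
      services:
        - name: project1-ssh    # hypothetical Service fronting project1's SSH daemon
          port: 22
A second IngressRouteTCP bound to the other entrypoint covers project2, and Traefik's own LoadBalancer Service is then the single cloud load balancer exposing both ports.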

Kubernetes - Expose Website using nginx-ingress

I have a website running inside a Kubernetes cluster.
I can access it locally, but I want to make it available over the Internet (I have a registered domain). However, the external IP keeps pending.
I worked with this instruction: https://dev.to/peterj/expose-a-kubernetes-service-on-your-own-custom-domain-52dd
This is the code for the service and ingress
kind: Service
apiVersion: v1
metadata:
  name: app-service
spec:
  selector:
    app: website
  ports:
    - name: http
      protocol: TCP
      port: 3000
      targetPort: 3000
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: app-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
    - host: www.carina.bernrieder.de
      http:
        paths:
          - path: /
            backend:
              serviceName: app-service
              servicePort: 3000
So I'm using Helm to install the nginx controller, but after that kubectl get all shows the EXTERNAL-IP of the nginx controller stuck in pending.
EXTERNAL-IP is expected to be pending in a non-cloud environment such as minikube. You should be able to access the application using curl www.carina.bernrieder.de
Here is a guide on using nginx ingress to expose an application on minikube.
As @Arghya Sadhu mentioned, in a local environment this is the expected behaviour. Maybe it will be easier to understand when you look a bit more deeply at how it works in cloud environments. Without going into details, if you apply an Ingress resource on GKE, EKS or AKS, a few more things happen "under the hood". A load balancer with an external IP is automatically created so your Ingress can use it to forward external traffic to Pods deployed on your Kubernetes cluster.
Minikube doesn't have such capabilities as it cannot make any call to any API for additional infrastructure resources to be created, as it happens on cloud environments.
But let's start from the beginning. You didn't mention anything in your question about your external IP or domain configuration. If you don't have an external static IP to which your domain has been redirected, it has no chance of working anyway.
As to this point, I won't fully agree:
You should be able to access the application using curl
www.carina.bernrieder.de
Yes, you will be able to access it via your domain (actually via any domain that you don't even need to own) provided you add the following entry in your /etc/hosts file, so DNS won't be used and the name will be resolved based on this locally defined mapping:
172.17.0.15 www.carina.bernrieder.de
As you can read here:
Note: If you are running Minikube locally, use minikube ip to get the
external IP. The IP address displayed within the ingress list will be
the internal IP.
But keep in mind that both of those IPs will be private IPs. The one displayed within the ingress list will be the internal cluster IP, and the "external" one will be external only from your Minikube cluster's perspective. It will still be an IP in your local network, assigned to your Minikube VM.
And as you said in your question, you want to make it available over the Internet. As you can see, that has no chance of working without additional configuration.
Another important thing: you didn't mention where your Minikube is actually installed, so I guess you set it up on your local computer and most probably you're behind a NAT router. If that is your case, it won't be so easy to expose it on the public Internet. You will need to configure proper port forwarding rules on your router, and of course you need a static IP, or you need to configure dynamic DNS to be able to access your computer on the Internet via your dynamic public IP.
Minikube was designed mainly for playing with Kubernetes locally, not for production environments. Of course you can use it to run your small app, but then you may want to think about installing it on a VM in a cloud environment or on some sort of VPS.

How to create https endpoint in Google Cloud from http based server for Kubernetes Engine?

I have been trying to create an HTTPS endpoint in a Google Cloud K8s environment.
I have built a Flask application in Python that is served by Waitress in production on port 5000.
serve(app, host='0.0.0.0', port=5000, ipv6=False, threads=30)
I created a Dockerfile and pushed the image to the Google Cloud container registry. Then I created a Kubernetes cluster with one workload containing this image. After that, I exposed it via an external IP by creating a LoadBalancer. (After pushing the image to the Google registry, everything is managed through the Google Cloud Console. I do not have any configuration file; it should all be done through the Google Cloud Console.)
Now I have an exposed IP address and port to access my application. Let's say this IP address and port are 11.111.11.222:1111. I can access this IP via Postman and get a result.
My goal is, if it is possible, to expose this IP address via HTTPS as well, by using any Google Cloud resources (redirection, creating an ingress, etc.).
So, in the end I want to reach the application through both http://11.111.11.222:1111 and https://11.111.11.222:1111
Any suggestions?
A LoadBalancer translates to a network load balancer. You can configure multiple ports for it, e.g. 80 and 443, but then your application must handle the TLS part itself.
The Ingress resource, on the other hand, creates an HTTP(S) load balancer.
From the GKE perspective you can try to configure an Ingress resource with HTTPS enabled:
Steps:
Create a basic flask app inside a pod (for example purposes only)
Expose the app via a Service object of type NodePort
Create a certificate
Create an Ingress resource
Test
Additional information (added by EDIT)
Create a basic flask app inside a pod (for example purposes only)
Below is a flask script which will respond with <h1>Hello!</h1>:
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    return "<h1>Hello!</h1>"

if __name__ == "__main__":
    from waitress import serve
    serve(app, host="0.0.0.0", port=8080)
By default it will respond on port 8080.
Link to an answer with above script.
Expose the app via a Service object of type NodePort
Assuming that the Deployment is configured correctly with a working app inside, you can expose it via a Service object of type NodePort with the following YAML definition:
apiVersion: v1
kind: Service
metadata:
  name: flask-service
spec:
  type: NodePort
  selector:
    app: ubuntu
  ports:
    - name: flask-port
      protocol: TCP
      port: 80
      targetPort: 8080
Please make sure that:
the selector is configured correctly
targetPort points to the port the app is running on
Create a certificate
For the Ingress object to work with HTTPS you will need to provide a certificate. You can create it by following the official GKE documentation: Cloud.google.com: Managed certificates
Be aware that you will need a domain name to do that.
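As a rough sketch of that step (assuming the managed-certificate approach, with a name matching the annotation used in the Ingress below; the apiVersion may differ depending on your GKE version):
apiVersion: networking.gke.io/v1beta2   # may be networking.gke.io/v1 on newer clusters
kind: ManagedCertificate
metadata:
  name: flask-certificate
spec:
  domains:
    - DOMAIN.NAME               # replace with the domain you own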
Create an Ingress resource
Below is an example Ingress resource which will point your requests to your flask application:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: flask-ingress
  annotations:
    networking.gke.io/managed-certificates: flask-certificate
    kubernetes.io/ingress.global-static-ip-name: flask-static-ip
spec:
  rules:
    - host: DOMAIN.NAME
      http:
        paths:
          - path: /
            backend:
              serviceName: flask-service
              servicePort: flask-port
Please take a specific look at the part of the YAML definition below and change it according to your case:
networking.gke.io/managed-certificates: flask-certificate
kubernetes.io/ingress.global-static-ip-name: flask-static-ip
Please wait for everything to configure correctly.
After that you will have access to your application via domain.name on ports:
80 (HTTP)
443 (HTTPS)
Currently the Ingress only supports a single TLS port, 443, and assumes TLS termination.
-- Kubernetes.io: Ingress TLS
Test
You can check if the above steps are configured correctly by:
entering https://DOMAIN.NAME in your web browser and checking if it responds with Hello over HTTPS
using a tool like curl -v https://DOMAIN.NAME
Please let me know if this solution works for you.
Additional information (added by EDIT)
You can try to configure a Service object of type LoadBalancer, which will operate at layer 4 as @Florian said in his answer.
Please refer to official documentation: Kubernetes.io: Create external load balancer
You can also use the Nginx Ingress controller and either:
Expose a TCP/UDP service by following: Kubernetes.github.io: Ingress nginx: Exposing tcp udp services, which operates at L4 (a sketch of this follows below).
Create an Ingress resource that has SSL passthrough configured by following: Kubernetes.github.io: Ingress nginx: Ssl passthrough
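For the first option, exposing a raw TCP port through ingress-nginx comes down to an entry in the tcp-services ConfigMap the controller is started with (--tcp-services-configmap). Below is a minimal sketch; the namespace, external port and service mapping are assumptions for illustration, so adjust them to your installation, and remember the controller's own Service must also expose the chosen port:
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services          # the ConfigMap referenced by --tcp-services-configmap
  namespace: ingress-nginx
data:
  "5000": "default/flask-service:80"   # format: "external port": "namespace/service:service-port" (hypothetical mapping)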
After researching, I found the answer in Google Cloud Run. It is very simple to deploy an HTTP-based Flask app in a container, keep serving it with serve(app, host='0.0.0.0', port=5000, ipv6=False, threads=30) (no need for a self-signed certificate or HTTPS in this part, just make sure the HTTP app works), and then push it to Cloud Run.
Adjust the service parameters depending on how many resources you need to run it. In the container settings, set the port that you are using in the Docker container so it gets mapped; in my case it is 5000. When you create the service, Google provides you with a domain address with HTTPS. You can use that URL to access your resources.
That's it!
For more information on Cloud Run:
https://cloud.google.com/serverless-options
The differences between computing platforms: https://www.signalfx.com/blog/gcp-serverless-comparison/
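For reference, the same Cloud Run service can also be described declaratively. Here is a minimal sketch of a Knative service definition with the container port set to 5000 (the image name is a placeholder; such a file can be applied with gcloud run services replace service.yaml):
# service.yaml -- hypothetical names for illustration
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: flask-app
spec:
  template:
    spec:
      containers:
        - image: gcr.io/PROJECT_ID/flask-app   # replace with the image you pushed
          ports:
            - containerPort: 5000              # port Waitress listens on inside the container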

Kubernetes pods can not make https request after deploying istio service mesh

I am exploring the Istio service mesh on my k8s cluster hosted on EKS (Amazon).
I tried deploying istio-1.2.2 on a new k8s cluster with the demo.yml file used for the bookapp demonstration, and I understand most of the use cases properly.
Then I deployed Istio using the Helm default profile (recommended for production) on my existing dev cluster with hundreds of microservices running, and what I noticed is that my services can call HTTP endpoints but are not able to call external secure endpoints (https://www.google.com, etc.).
I am getting :
curl: (35) error:1400410B:SSL routines:CONNECT_CR_SRVR_HELLO:wrong
version number
Though I am able to call external HTTPS endpoints from my testing cluster.
To verify, I checked the egress policy and it is mode: ALLOW_ANY in both clusters.
Now I removed Istio completely from my dev cluster and installed the demo.yml to test, but now this is also not working.
I tried to relate my issue to this, but didn't get any success.
https://discuss.istio.io/t/serviceentry-for-https-on-httpbin-org-resulting-in-connect-cr-srvr-hello-using-curl/2044
I don't understand what I am missing or what I am doing wrong.
Note: I am referring to this setup: https://istio.io/docs/setup/kubernetes/install/helm/
This is most likely a bug in Istio (see for example istio/istio#14520): if you have any Kubernetes Service object, anywhere in your cluster, that listens on port 443 but whose name starts with http (not https), it will break all outbound HTTPS connections.
The instance of this I've hit involves configuring an AWS load balancer to do TLS termination. The Kubernetes Service needs to expose port 443 to configure the load balancer, but it receives plain unencrypted HTTP.
apiVersion: v1
kind: Service
metadata:
  name: breaks-istio
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:...
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
spec:
  selector: ...
  ports:
    - name: http-ssl # <<<< THIS NAME MATTERS
      port: 443
      targetPort: http
When I've experimented with this, changing that name: to either https or tcp-https seems to work. Those name prefixes are significant to Istio, but I haven't immediately found any functional difference between telling Istio the port is HTTPS (even though it doesn't actually serve TLS) vs. plain uninterpreted TCP.
You do need to search your cluster and find every Service that listens on port 443, and make sure the port name doesn't start with http-....
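For clarity, the fix described above only changes the port name; here is a sketch of the same Service with an Istio-friendly name (everything else as before, names still hypothetical):
apiVersion: v1
kind: Service
metadata:
  name: works-with-istio
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:...
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
spec:
  selector: ...
  ports:
    - name: tcp-https    # renamed so Istio treats the port as opaque TCP rather than HTTP
      port: 443
      targetPort: http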

Sock-shop on GCP with Loadbalancer

I am trying to deploy and access Sock-shop on Google Cloud Platform.
https://github.com/microservices-demo/microservices-demo
I was able to deploy it using the deployment script
https://github.com/microservices-demo/microservices-demo/blob/master/deploy/kubernetes/complete-demo.yaml
Based on the tutorial here
https://www.weave.works/docs/cloud/latest/tasks/deploy/sockshop-deploy/
It says
Display the Sock Shop in the browser using:
<master-node-IP>:<NodePort>
But on GCP master node is hidden from the user.
So I changed the type from NodePort to LoadBalancer.
And I was able to get an external IP.
But it says the page cannot be found.
Do I need to set up more stuff for LoadBalancer?
I don't know if you have solved the issue, but I did, so I would like to share the solution that works for me.
You can do it in two ways:
1st) By creating a Load Balancer, where you expose the front-end service.
I assume that you have already created a namespace called sock-shop, so any further command should specify and refer to that namespace.
If you type and execute the command:
kubectl get services --namespace=sock-shop
you should be able to see a list with all the services, including a service called "front-end". Now you want to expose that service not as a NodePort but as a LoadBalancer. So, execute the command:
kubectl expose service front-end --name=front-end-lb --port=80 --target-port=8079 --type=LoadBalancer --namespace=sock-shop
After this, give it some time and you will be able to access the front end of the Sock Shop via a public (ephemeral) IP address.
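For reference, the kubectl expose command above generates roughly the following Service (a sketch reconstructed from the flags; kubectl expose copies the selector from the existing front-end Service, shown here as an assumption):
apiVersion: v1
kind: Service
metadata:
  name: front-end-lb
  namespace: sock-shop
spec:
  type: LoadBalancer
  selector:
    name: front-end      # assumption: selector taken from the existing front-end Service
  ports:
    - port: 80
      targetPort: 8079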
2nd) A more advanced way is by configuring an Ingress load balancer.
You need to create another YAML file, put this code inside, and apply it as you did with the previous .yaml file.
nano basic-ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  namespace: sock-shop
  name: basic-ingress
spec:
  backend:
    serviceName: front-end
    servicePort: 80
kubectl apply -f basic-ingress.yaml --namespace=sock-shop
Locate the public IP address with the command below; after a maximum of about 15 minutes you should be able to access the Sock Shop.
kubectl get ingress --namespace=sock-shop
I would recommend switching back to NodePort in the corresponding Service and creating an Ingress resource in your GCP cluster.
If you want to access the related application from outside the cluster, Kubernetes provides the Ingress mechanism to expose HTTP and HTTPS routes to your internal services.
Basically, an HTTP(S) load balancer is created by default in GKE once the Ingress resource has been implemented successfully, and it will take care of routing all the external HTTP/S traffic to the nested Kubernetes services.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: basic-ingress
spec:
  backend:
    serviceName: web
    servicePort: 8080
You can check the External IP address for Load Balancer by the following command:
kubectl get ingress basic-ingress
I found this article very useful for further research.