I've created a private GKE cluster with Istio through the Cloud Console UI. The cluster is set up with VPC Peering to be able to reach another private GKE cluster in another Google Cloud Project.
I've created a Deployment (called website) with a Service in Kubernetes in the staging namespace. My goal is to expose this service to the outside world with Istio, using the Envoy proxy. I've created the necessary VirtualService and Gateway to do so, following this guide.
When running "kubectl exec ..." to access a pod in the private cluster, I can successfully connect to the internal IP address of the website service, and see the output of that service with "curl".
I have set up a NAT Gateway so pods in the private cluster can connect to the Internet. I confirmed this by curl-ing various non-Google web pages from within the website pod.
However, I can't connect to the website service from the outside, using the External IP of the istio-ingressgateway service, as the guide above mentions. Instead, curl-ing that External IP leads to a timeout.
I've put the full YAML config for all related resources in a private Gist, here: https://gist.github.com/marceldegraaf/0f36ca817a8dba45ac97bf6b310ca282
I'm wondering if I'm missing something in my config here, or if my use case is actually impossible?
Looking at your Gist, I suspect the problem lies in how the Gateway is joined up to the istio-ingressgateway.
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: website-gateway
  namespace: staging
  labels:
    version: v1
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
In particular I'm not convinced the selector part is correct.
You should be able to do something like
kubectl describe po -n istio-system istio-ingressgateway-rrrrrr-pppp
to find out which labels on the Istio ingress gateway pod the selector is trying to match.
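For reference, on a default Istio install the ingress gateway pod typically carries labels along these lines (a sketch; exact labels can vary by install method), and the Gateway's selector above must match them:

Labels: app=istio-ingressgateway
        istio=ingressgateway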
I had the same problem. In my case, the Istio VirtualService couldn't find my service.
Try this on your VirtualService:
route:
- destination:
    host: website
    port:
      number: 80
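For context, a complete VirtualService wiring this route to the Gateway from earlier in the thread might look like the following (a sketch; the resource name is an assumption, while the gateway, host, and port come from the thread):

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: website
  namespace: staging
spec:
  hosts:
  - "*"
  gateways:
  - website-gateway
  http:
  - route:
    - destination:
        host: website
        port:
          number: 80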
After verifying all options, the only way to expose a private GKE cluster with Istio externally is to use Cloud NAT.
Since the master node within GKE is a managed service, there are currently limitations when using Istio with a private cluster. The only workaround that accomplishes your use case is to use Cloud NAT. I have also attached an article on how to get started with Cloud NAT here.
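For a starting point, a minimal Cloud NAT setup with gcloud looks roughly like this (a sketch; the router name, network, and region are assumptions to adjust for your project):

gcloud compute routers create nat-router \
    --network=my-vpc \
    --region=europe-west1
gcloud compute routers nats create nat-config \
    --router=nat-router \
    --region=europe-west1 \
    --auto-allocate-nat-external-ips \
    --nat-all-subnet-ip-ranges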
I'm trying to build a Kubernetes cluster to allow multi-website testing, with multiple database engines, multiple PHP versions, multiple dependencies, multiple front-end stacks, ...
So, my goal is to build something similar to this:
(image: infrastructure schema)
When using ingress-nginx, my cloud provider gives me a LoadBalancer IP.
I was able to deploy ingress-nginx to route my HTTP/HTTPS traffic to the right service using ingress host rules.
Now, I want to be able to connect via SSH to the project1_ssh service with the LoadBalancer IP on port 2022, and to the project2_ssh service with the same LoadBalancer IP on port 2023.
Can I achieve that?
I'm not sure ingress-nginx will allow me to do that.
I was able to connect successfully to my SSH service by declaring this kind of service:
kind: Service
apiVersion: v1
metadata:
  name: ssh-service
spec:
  selector:
    app: project1
  ports:
  - port: 2300
    targetPort: 23
  type: LoadBalancer
But doing this creates a new LoadBalancer IP, and a new bill from the cloud provider.
I want to have only one LoadBalancer service.
Any suggestions?
The idea is to run ~50 websites, each one in a pod.
OK, I finally removed ingress-nginx, switched to Traefik v2, and achieved what I wanted.
Now I will try to figure out if SNI can make it even simpler (one single port for all my SSH services, with the host named in my SSH request used to route the connection to the right service inside the cluster).
https://kupczynski.info/2019/05/21/traefik-sni.html
Will let you know if it finally works
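For anyone curious, the Traefik v2 TCP routing looks roughly like this (a sketch; the entrypoint and service names are assumptions, and the ssh-p1 entrypoint itself must be declared on port 2022 in Traefik's static configuration):

apiVersion: traefik.containo.us/v1alpha1
kind: IngressRouteTCP
metadata:
  name: project1-ssh
spec:
  entryPoints:
    - ssh-p1                  # assumed entrypoint, bound to port 2022 in the static config
  routes:
    - match: HostSNI(`*`)     # plain TCP (no TLS), so match everything on this entrypoint
      services:
        - name: project1-ssh-service   # assumed Service name
          port: 22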
I am learning Kubernetes and trying to deploy an app using Minikube.
I have managed to expose the service mapped to the nginx pod on the Minikube IP. I can access the nginx service at the URL $(minikube ip):$(serviceport), which is fine. However, I am looking to expose this to the public network. Currently this service is only accessible via my local machine; any other machine on my WiFi network is not able to access it, as it is exposed only on the Minikube IP.

I don't want to forward the port in my local Linux via iptables, and I am looking for a built-in solution to expose the port to the world (and not just on the Minikube IP). I know it can be achieved, as the Minikube dashboard by default exposes its service on localhost; this implies that Minikube can talk to other network adapters and can register the port, though I am not sure how.
Here is my service yaml:
apiVersion: v1
kind: Service
metadata:
  annotations:
    service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"
  name: nginxservice
  labels:
    app: nginxservice
spec:
  type: NodePort
  ports:
  - port: 80
    name: http
    targetPort: 80
    nodePort: 32756
  selector:
    app: nginxcontainer
#subudear is right - you need Ingress.
An API object that manages external access to the services in a cluster, typically HTTP. Ingress may provide load balancing, SSL termination and name-based virtual hosting.

Ingress exposes HTTP and HTTPS routes from outside the cluster to services within the cluster. Traffic routing is controlled by rules defined on the Ingress resource.
To be able to regularly use Ingress (I'm not talking about minikube right now), it is not enough to simply create an Ingress object. You should first install the related ingress controller.
There are a lot of them; the most popular are:
NGINX Ingress Controller
Kubernetes Nginx Ingress Controller
Traefik
Istio Ingress Controller
The first 2 are very similar but use absolutely different annotations. It often happens that people confuse them.
Talking about minikube:
As per the guidelines, in order to install ingress, the only thing you have to do is:
minikube addons enable ingress
Please note that by default, minikube installs exactly the NGINX Ingress Controller:
nginx-ingress-controller-5984b97644-rnkrg 1/1 Running 0 1m
Then you have to create an Ingress resource.
Follow the steps in this doc - https://kubernetes.io/docs/tasks/access-application-cluster/ingress-minikube/
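For a quick start, an Ingress along these lines should route traffic to the nginxservice from the question (a sketch; the hostname is an assumption, and the API version matches the other examples in this thread):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx-ingress
spec:
  rules:
  - host: nginx.example.com        # assumed hostname; map it to your minikube ip in /etc/hosts
    http:
      paths:
      - path: /
        backend:
          serviceName: nginxservice
          servicePort: 80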
I have a website running inside a kubernetes cluster.
I can access it locally, but want to make it available over the Internet (I have a registered domain), but the external IP keeps pending.
I worked with this instruction: https://dev.to/peterj/expose-a-kubernetes-service-on-your-own-custom-domain-52dd
This is the code for the service and ingress
kind: Service
apiVersion: v1
metadata:
  name: app-service
spec:
  selector:
    app: website
  ports:
  - name: http
    protocol: TCP
    port: 3000
    targetPort: 3000
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: app-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: www.carina.bernrieder.de
    http:
      paths:
      - path: /
        backend:
          serviceName: app-service
          servicePort: 3000
So I'm using Helm to install the nginx controller, but after that, in kubectl get all, the external IP of the nginx controller keeps pending.
EXTERNAL-IP is expected to be pending in a non-cloud environment such as minikube. You should be able to access the application using curl www.carina.bernrieder.de
Here is a guide on using nginx ingress to expose an application on minikube.
As #Arghya Sadhu mentioned, in a local environment this is the expected behaviour. Maybe it will be easier to understand when you look a bit more deeply at how it works in cloud environments. Without going into details, if you apply an Ingress resource on GKE, EKS or AKS, a few more things happen "under the hood". A load balancer with an external IP is automatically created so your ingress can use it to forward external traffic to Pods deployed on your kubernetes cluster.
Minikube doesn't have such capabilities as it cannot make any call to any API for additional infrastructure resources to be created, as it happens on cloud environments.
But let's start from the beginning. You didn't mention anything in your question about your external IP or domain configuration. If you don't have an external static IP to which your domain has been redirected, it has no chance of working anyway.
As to this point, I won't fully agree:
You should be able to access the application using curl www.carina.bernrieder.de
Yes, you will be able to access it via your domain (actually via any domain that you don't even need to own), provided you add the following entry in your /etc/hosts file. That way DNS won't be used and the name will be resolved based on this locally defined mapping:
172.17.0.15 www.carina.bernrieder.de
As you can read here:
Note: If you are running Minikube locally, use minikube ip to get the external IP. The IP address displayed within the ingress list will be the internal IP.
But keep in mind that both of those IPs will be private IPs. The one displayed within the ingress list will be the internal cluster IP, and the "external" one will be external only from your Minikube cluster's perspective. It will still be an IP in your local network, assigned to your Minikube VM.
And as you said in your question you want to make it available over the Internet. As you can see it has no chances to work without additional configuration.
Another important thing. You didn't mention where your Minikube is actually installed, so I guess you set it up on your local computer and most probably you're behind a NAT router. If this is your case, it won't be so easy to expose it on the public Internet. You will need to configure proper port forwarding rules on your router, and of course you need a static IP, or you need to configure dynamic DNS to be able to access your computer on the Internet via your dynamic public IP.
Minikube was designed mainly for playing with Kubernetes locally, not for production environments. Of course you can use it to run your small app, but then you may want to think about installing it on a VM in a cloud environment or on some sort of VPS server.
I am trying to deploy and access Sock-shop on Google Cloud Platform.
https://github.com/microservices-demo/microservices-demo
I was able to deploy it using the deployment script
https://github.com/microservices-demo/microservices-demo/blob/master/deploy/kubernetes/complete-demo.yaml
Based on the tutorial here
https://www.weave.works/docs/cloud/latest/tasks/deploy/sockshop-deploy/
It says
Display the Sock Shop in the browser using:
<master-node-IP>:<NodePort>
But on GCP the master node is hidden from the user.
So I changed the type from NodePort to LoadBalancer.
And I was able to get an external IP.
But it says the page cannot be found.
Do I need to set up more stuff for LoadBalancer?
I don't know if you've solved the issue, but I did, so I would like to share the solution that worked for me.
You can do it in two ways:
1st) By creating a Load Balancer, where you expose the front-end service.
I assume that you have already created a namespace called sock-shop, so any further command should specify and refer to that namespace.
If you type and execute the command:
kubectl get services --namespace=sock-shop
you should be able to see a list of all the services, including a service called "front-end". So now you want to expose that service not as NodePort but as LoadBalancer. So, execute the command:
kubectl expose service front-end --name=front-end-lb --port=80 --target-port=8079 --type=LoadBalancer --namespace=sock-shop
After this, give it some time and you will be able to access the front end of the Sock Shop via a public (ephemeral) IP address.
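To find that IP once it has been assigned, the following should work (front-end-lb is the name given in the expose command above; --watch simply keeps polling until EXTERNAL-IP changes from <pending> to a real address):

kubectl get service front-end-lb --namespace=sock-shop --watch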
2nd) A more advanced way is by configuring an Ingress load balancer.
You need to create another YAML file, put this code inside, and apply it as you did with the previous .yaml file.
nano basic-ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  namespace: sock-shop
  name: basic-ingress
spec:
  backend:
    serviceName: front-end
    servicePort: 80
kubectl apply -f basic-ingress.yaml --namespace=sock-shop
Locate the public IP address through this command, and after a maximum of 15 minutes you should be able to access the Sock Shop:
kubectl get ingress --namespace=sock-shop
I would recommend switching back to NodePort in the corresponding Service and creating an Ingress resource in your GCP cluster.
If you want to access the related application from outside the cluster, Kubernetes provides the Ingress mechanism to expose HTTP and HTTPS routes to your internal services.
Basically, an HTTP(S) load balancer is created by default in GKE once an Ingress resource has been implemented successfully, and it will take care of routing all the external HTTP/S traffic to the nested Kubernetes services.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: basic-ingress
spec:
  backend:
    serviceName: web
    servicePort: 8080
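Note that a GKE Ingress expects the backing Service to be of type NodePort, which is why I recommend switching back. A minimal sketch of the corresponding Service (the selector label is an assumption):

apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: NodePort
  selector:
    app: web            # assumed pod label; must match your Deployment
  ports:
  - port: 8080
    targetPort: 8080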
You can check the external IP address of the load balancer with the following command:
kubectl get ingress basic-ingress
I found this article, which could be very useful in your research.
I am using the Concourse Helm build provided at https://github.com/kubernetes/charts/tree/master/stable/concourse to set up Concourse inside of our Kubernetes cluster. I have been able to get the setup working and I am able to access it within the cluster, but I am having trouble accessing it outside the cluster. The notes from the build show that I can just use kubectl port-forward to get to the webpage, but I don't want all of the developers to have to forward the port just to get to the web UI. I have tried creating a service that has a node port like this:
apiVersion: v1
kind: Service
metadata:
  name: concourse
  namespace: concourse-ci
spec:
  ports:
  - port: 8080
    name: atc
    nodePort: 31080
  - port: 2222
    name: tsa
    nodePort: 31222
  selector:
    app: concourse-web
  type: NodePort
This allows me to get to the webpage and interact with it in most ways, but when I try to look at a build's status it never loads the events that happened. Instead, a network request for /api/v1/builds/1/events is stuck pending and the steps of the build never load. Any ideas what I can do to be able to completely access Concourse from outside the cluster?
EDIT: It seems like the events network request normally responds with a text/event-stream data type, and maybe the Kubernetes service isn't handling the event stream correctly. Or there is something about Concourse that handles event streams differently than the norm.
After plenty of investigation I have found that the NodePort service is actually working, and it is just my antivirus (Sophos) that is silently blocking the response from the events request.
Also, you can expose your port through a LoadBalancer in Kubernetes:
kubectl get deployments
kubectl expose deployment <web deployment name> --port=80 --target-port=8080 --name=expoport --type=LoadBalancer
It will create a public IP for you, and you will be able to access Concourse on port 80.
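To retrieve that public IP once your cloud provider has provisioned it (expoport is the name given in the expose command above; --watch keeps polling until the IP appears):

kubectl get service expoport --watch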
Not sure, since I'm also a newbie, but... you can configure your chart by providing your own version of https://github.com/kubernetes/charts/blob/master/stable/concourse/values.yaml
helm install stable/concourse -f custom_values.yaml
There is an 'externalURL' param; it may be worth trying to set it to your URL:
## URL used to reach any ATC from the outside world.
##
# externalURL:
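As a rough sketch of what that could look like in a custom_values.yaml (the exact nesting should mirror the chart's default values.yaml, and the URL and port here are assumptions based on the NodePort service above):

concourse:
  ## URL used to reach any ATC from the outside world.
  externalURL: http://concourse.example.com:31080   # assumed hostname and port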
In addition, if you are on GKE, you can use an internal load balancer; set it up in your values.yaml file:
service:
  ## For minikube, set this to ClusterIP, elsewhere use LoadBalancer or NodePort
  ## ref: https://kubernetes.io/docs/user-guide/services/#publishing-services---service-types
  ##
  #type: ClusterIP
  type: LoadBalancer

  ## When using web.service.type: LoadBalancer, sets the user-specified load balancer IP
  # loadBalancerIP: 172.217.1.174

  ## Annotations to be added to the web service.
  ##
  annotations:
    # May be used in example for internal load balancing in GCP:
    cloud.google.com/load-balancer-type: Internal