How can I access Concourse built with Helm outside of the cluster?

I am using the Concourse Helm chart provided at https://github.com/kubernetes/charts/tree/master/stable/concourse to set up Concourse inside our Kubernetes cluster. The setup works and I can access it within the cluster, but I am having trouble accessing it outside the cluster. The chart's notes show that I can use kubectl port-forward to get to the web page, but I don't want every developer to have to forward the port just to reach the web UI. I have tried creating a service with a NodePort like this:
apiVersion: v1
kind: Service
metadata:
  name: concourse
  namespace: concourse-ci
spec:
  ports:
  - port: 8080
    name: atc
    nodePort: 31080
  - port: 2222
    name: tsa
    nodePort: 31222
  selector:
    app: concourse-web
  type: NodePort
This lets me reach the web page and interact with it in most ways, but when I try to look at a build's status the events never load. Instead, the network request for /api/v1/builds/1/events is stuck in pending and the build steps never appear. Any ideas how I can fully access Concourse from outside the cluster?
EDIT: It seems like the events request normally responds with a text/event-stream content type, and maybe the Kubernetes service isn't handling the event stream correctly, or Concourse handles event streams differently than the norm.

After plenty of investigation I have found that the NodePort service is actually working and it is just my antivirus (Sophos) silently blocking the response to the events request.

Also, you can expose the port through a LoadBalancer service in Kubernetes.
kubectl get deployments
kubectl expose deployment <web-deployment-name> --port=80 --target-port=8080 --name=expoport --type=LoadBalancer
This will create a public IP for you, and you will be able to access Concourse on port 80.
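For reference, the Service created by kubectl expose is roughly equivalent to a manifest like this (the selector label is an assumption; match whatever labels your web Deployment's pods actually carry):
apiVersion: v1
kind: Service
metadata:
  name: expoport
spec:
  type: LoadBalancer
  selector:
    app: concourse-web     # assumed label; use your web pods' actual labels
  ports:
  - port: 80               # port exposed on the load balancer's public IP
    targetPort: 8080       # ATC web port inside the pod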

Not sure, since I'm also a newbie, but you can configure the chart by providing your own version of https://github.com/kubernetes/charts/blob/master/stable/concourse/values.yaml:
helm install stable/concourse -f custom_values.yaml
There is an 'externalURL' param; it may be worth trying to set it to your URL:
## URL used to reach any ATC from the outside world.
##
# externalURL:
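For example, a custom_values.yaml that sets it could look roughly like this (the URL is a placeholder, and the key may sit deeper in the values hierarchy depending on the chart version, so mirror the layout of the chart's own values.yaml):
## URL used to reach any ATC from the outside world.
externalURL: http://concourse.example.com:31080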

In addition, if you are on GKE, you can use an internal load balancer; set it up in your values.yaml file:
service:
  ## For minikube, set this to ClusterIP, elsewhere use LoadBalancer or NodePort
  ## ref: https://kubernetes.io/docs/user-guide/services/#publishing-services---service-types
  ##
  # type: ClusterIP
  type: LoadBalancer

  ## When using web.service.type: LoadBalancer, sets the user-specified load balancer IP
  # loadBalancerIP: 172.217.1.174

  ## Annotations to be added to the web service.
  ##
  annotations:
    # May be used in example for internal load balancing in GCP:
    cloud.google.com/load-balancer-type: Internal

Related

Kubernetes - Expose Website using nginx-ingress

I have a website running inside a kubernetes cluster.
I can access it locally, but I want to make it available over the internet (I have a registered domain). However, the external IP keeps pending.
I followed this guide: https://dev.to/peterj/expose-a-kubernetes-service-on-your-own-custom-domain-52dd
This is the code for the Service and Ingress:
kind: Service
apiVersion: v1
metadata:
  name: app-service
spec:
  selector:
    app: website
  ports:
  - name: http
    protocol: TCP
    port: 3000
    targetPort: 3000
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: app-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: www.carina.bernrieder.de
    http:
      paths:
      - path: /
        backend:
          serviceName: app-service
          servicePort: 3000
So I'm using Helm to install the nginx controller, but after that, in kubectl get all, the external IP of the nginx controller stays pending.
EXTERNAL-IP is expected to stay pending in a non-cloud environment such as minikube. You should be able to access the application using curl www.carina.bernrieder.de.
Here is a guide on using nginx ingress to expose an application on minikube.
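For minikube specifically, the rough sequence (assuming a recent minikube and that the Ingress above is already applied) looks like this:
# Use the bundled nginx ingress controller instead of installing one with Helm
minikube addons enable ingress

# Find the IP of the minikube VM
minikube ip

# Point the domain at that IP locally, e.g. add to /etc/hosts:
# <minikube-ip> www.carina.bernrieder.de

curl http://www.carina.bernrieder.de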
As #Arghya Sadhu mentioned, in a local environment this is the expected behaviour. Maybe it will be easier to understand when you look a bit more deeply at how it works in cloud environments. Without going into details, if you apply an Ingress resource on GKE, EKS or AKS, a few more things happen "under the hood". A load balancer with an external IP is automatically created so your Ingress can use it to forward external traffic to Pods deployed on your Kubernetes cluster.
Minikube doesn't have such capabilities, as it cannot call any cloud API to create additional infrastructure resources the way it happens in cloud environments.
But let's start from the beginning. You didn't mention anything in your question about your external IP or domain configuration. If you don't have an external static IP to which your domain points, it has no chance of working anyway.
As to this point, I won't fully agree:
You should be able to access the application using curl www.carina.bernrieder.de
Yes, you will be able to access it via your domain (actually via any domain that you don't even need to own), provided you add the following entry to your /etc/hosts file so that DNS isn't used and the name is resolved based on this locally defined mapping:
172.17.0.15 www.carina.bernrieder.de
As you can read here:
Note: If you are running Minikube locally, use minikube ip to get the external IP. The IP address displayed within the ingress list will be the internal IP.
But keep in mind that both of those IPs are private IPs. The one displayed within the ingress list is the internal cluster IP, and the other one is external only from your Minikube cluster's perspective. It is still an IP in your local network, assigned to your Minikube VM.
And as you said in your question, you want to make it available over the Internet. As you can see, it has no chance of working without additional configuration.
Another important thing: you didn't mention where your Minikube is actually installed, so I guess you set it up on your local computer and most probably you're behind a NAT router. If that is your case, it won't be so easy to expose it on the public internet. You will need to configure proper port-forwarding rules on your router, and of course you need a static IP, or you need to configure dynamic DNS to be able to reach your computer on the Internet via your dynamic public IP.
Minikube was designed mainly for playing with Kubernetes locally, not for production environments. Of course you can use it to run your small app, but then you may want to think about installing it on a VM in a cloud environment or on some sort of VPS server.

Can I guarantee the "kubernetes" Service will retain a consistent ClusterIP following cluster creation even if I attempt to modify or recreate it?

A few of our Pods access the Kubernetes API via the "kubernetes" Service. We're in the process of applying Network Policies which allow access to the K8S API, but the only way we've found to accomplish this is to query for the "kubernetes" Service's ClusterIP, and include it as an ipBlock within an egress rule within the Network Policy.
Specifically, this value:
kubectl get services kubernetes --namespace default -o jsonpath='{.spec.clusterIP}'
Is it possible for the "kubernetes" Service ClusterIP to change to a value other than what it was initialized with during cluster creation? If so, there's a possibility our configuration will break. Our hope is that it's not possible, but we're hunting for official supporting documentation.
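For context, the kind of egress rule described above would look roughly like this (the namespace, pod selector, and names are placeholders; the cidr is whatever the jsonpath query above returns):
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-k8s-api
  namespace: my-namespace          # placeholder
spec:
  podSelector:
    matchLabels:
      needs-api-access: "true"     # placeholder label
  policyTypes:
  - Egress
  egress:
  - to:
    - ipBlock:
        cidr: 10.96.0.1/32         # the "kubernetes" Service ClusterIP at cluster creation
    ports:
    - protocol: TCP
      port: 443                    # the "kubernetes" Service port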
The short answer is no.
More details:
You cannot change/edit the clusterIP because it's immutable, so kubectl edit will not work for this field.
The service cluster IP can be changed easily by kubectl delete -f svc.yaml, then kubectl apply -f svc.yaml again.
Hence, never rely on a Service's IP, because Services are designed to be referred to by DNS:
Use service-name if the client is in the same namespace.
Use service-name.service-namespace if the client is in the same or a different namespace.
Use service-name.service-namespace.svc.cluster.local for the FQDN.
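For example, from a pod in the default namespace, all three forms resolve to the ClusterIP of the "kubernetes" Service (a quick check, assuming an image with nslookup available):
nslookup kubernetes
nslookup kubernetes.default
nslookup kubernetes.default.svc.cluster.local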
Yes, that is possible.
If you specify clusterIP in your Service YAML file (Service.spec.clusterIP), the IP address of your Service will not be random and will always be the same. The Service YAML should look like this:
apiVersion: v1
kind: Service
metadata:
  name: web
  namespace: default
spec:
  clusterIP: 10.96.0.100
  ports:
  - name: https
    port: 443
    protocol: TCP
    targetPort: 80
  type: ClusterIP
Be careful: the IP you choose must lie within the cluster's service CIDR and must not already be assigned in your cluster.

How to give service name and port in ConfigMap YAML?

I have a Service (ClusterIP) like the following, which exposes the ports of a backend pod:
apiVersion: v1
kind: Service
metadata:
  name: fsimulator
  namespace: myns
spec:
  type: ClusterIP
  selector:
    application: oms
  ports:
  - name: s-port
    port: 9780
  - name: b-port
    port: 8780
The frontend pod should be able to connect to the backend pod using the Service. Should we use the Service name as the hostname to connect from the frontend pod to the backend pod?
I have to supply the Service name and port to the frontend pod's container through environment variables.
The environment variables are set using a ConfigMap.
Is it enough to give the Service name fsimulator as the hostname to connect to?
How do I refer to the Service if it is created inside a namespace?
Thanks
Check out this documentation. The internal service PORT / IP pairs for active services are indeed passed into the containers by default.
As the documentation also says, it is possible (recommended) to use a DNS cluster add-on for service discovery. Accessing service.namespace from outside / inside a service will resolve to the correct service route (or just service from inside the namespace). This is usually the right path to take.
Built-in service discovery is a huge perk of using Kubernetes, use the available tools if at all possible!
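A minimal sketch of that approach for the question above, assuming the frontend reads hypothetical FSIMULATOR_HOST and FSIMULATOR_PORT variables:
apiVersion: v1
kind: ConfigMap
metadata:
  name: frontend-config            # hypothetical name
  namespace: myns
data:
  # Cluster DNS resolves the Service as <service>.<namespace>
  FSIMULATOR_HOST: fsimulator.myns
  FSIMULATOR_PORT: "9780"
The frontend Deployment's container can then load these values with envFrom / configMapRef, or with individual valueFrom entries.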

GKE/Istio: outside world cannot connect to service in private cluster

I've created a private GKE cluster with Istio through the Cloud Console UI. The cluster is set up with VPC Peering to be able to reach another private GKE cluster in another Google Cloud Project.
I've created a Deployment (called website) with a Service in Kubernetes in the staging namespace. My goal is to expose this service to the outside world with Istio, using the Envoy proxy. I've created the necessary VirtualService and Gateway to do so, following this guide.
When running "kubectl exec ..." to access a pod in the private cluster, I can successfully connect to the internal IP address of the website service, and see the output of that service with "curl".
I have set up a NAT Gateway so pods in the private cluster can connect to the Internet. I confirmed this by curl-ing various non-Google web pages from within the website pod.
However, I can't connect to the website service from the outside, using the External IP of the istio-ingressgateway service, as the guide above mentions. Instead, curl-ing that External IP leads to a timeout.
I've put the full YAML config for all related resources in a private Gist, here: https://gist.github.com/marceldegraaf/0f36ca817a8dba45ac97bf6b310ca282
I'm wondering if I'm missing something in my config here, or if my use case is actually impossible?
Looking at your Gist I suspect the problem lies in the joining up of the Gateway to the istio-ingressgateway.
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: website-gateway
  namespace: staging
  labels:
    version: v1
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
In particular I'm not convinced the selector part is correct.
You should be able to do something like
kubectl describe po -n istio-system istio-ingressgateway-rrrrrr-pppp
to find out what the selector is trying to match in the Istio Ingress Gateway pod.
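Alternatively, to check whether any ingress gateway pod actually carries the label that the Gateway's selector matches (istio: ingressgateway in the manifest above):
kubectl get pods -n istio-system -l istio=ingressgateway --show-labels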
I had the same problem. In my case, the Istio VirtualService didn't find my service.
Try this in your VirtualService:
route:
- destination:
    host: website
    port:
      number: 80
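For completeness, a full VirtualService tying that route to the Gateway above might look roughly like this (the names mirror the question; the hosts entry is an assumption):
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: website
  namespace: staging
spec:
  hosts:
  - "*"                        # or your external hostname
  gateways:
  - website-gateway            # must reference the Gateway shown earlier
  http:
  - route:
    - destination:
        host: website          # the Kubernetes Service name in the same namespace
        port:
          number: 80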
After verifying all the options, the only way to have the private GKE cluster with Istio exposed externally is to use Cloud NAT.
Since the master node within GKE is a managed service, there are currently limits when using Istio with a private cluster. The only workaround that would accomplish your use case is to use Cloud NAT. I have also attached an article on how to get started with Cloud NAT here.

NodePort service is not externally accessible via `port` number

I have the following service configuration:
kind: Service
apiVersion: v1
metadata:
  name: web-srv
spec:
  type: NodePort
  selector:
    app: userapp
    tier: web
  ports:
  - protocol: TCP
    port: 8090
    targetPort: 80
    nodePort: 31000
An nginx container is behind this service. Although I can access the service via the nodePort, it is not accessible via the port field. I'm able to see the config with kubectl and the Kubernetes dashboard, but curling that port (e.g. curl http://192.168.0.100:8090) raises a Connection Refused error.
I'm not sure what the problem is here. Do I need to make sure some proxy service is running inside the node or container?
Get the cluster IP of the Service and then hit port 8090; it will work.
nodePort implies that the service is bound to the node at port 31000.
These are the 3 things that will work:
curl <node-ip>:<node-port> # curl <node-ip>:31000
curl <service-ip>:<service-port> # curl <svc-ip>:8090
curl <pod-ip>:<target-port> # curl <pod-ip>:80
So now, let's look at 3 situations:
1. You are inside the kubernetes cluster (you are a pod)
<service-ip> and <pod-ip> and <node-ip> will work.
2. You are on the node
<service-ip> and <pod-ip> and <node-ip> will work.
3. You are outside the node
Only <node-ip> will work assuming that <node-ip> is reachable.
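A quick way to verify each path, assuming the web-srv Service above and a reachable <node-ip>:
# Look up the service and pod IPs first
kubectl get svc web-srv -o jsonpath='{.spec.clusterIP}'
kubectl get pods -l app=userapp,tier=web -o jsonpath='{.items[0].status.podIP}'

# From outside the cluster: only node IP + nodePort works
curl http://<node-ip>:31000

# From a node or a pod: the service port and target port also work
curl http://<cluster-ip>:8090
curl http://<pod-ip>:80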
The behavior is as expected, since I assume you are trying to access the service from outside the cluster. That means only the nodePort exposes the service to the world outside the cluster. The port field refers to the port the Service exposes on its cluster IP, while targetPort is the port exposed by the container inside the pod. This is generally the desired behavior, since clusters of services are typically fronted by a load balancer. The load balancer exposes the port you want for your service (e.g. load-balancer:80) and forwards to the nodePort on all nodes to distribute the load.
If you are accessing the service from inside the cluster, you should be able to reach it via service-name:service-port thanks to the built-in DNS.
More detailed information can be found at the docs.