As per: https://docs.nginx.com/nginx-ingress-controller/installation/installation-with-helm/
I'm trying to install ingress-nginx with custom ports, but it does not expose those ports when I pass in the controller.customPorts parameter. I think I'm not passing it in the right format. The documentation says
A list of custom ports to expose on the NGINX ingress controller pod. Follows the conventional Kubernetes yaml syntax for container ports.
Can anyone explain to me what that format should be?
Assuming they mean what shows up under a container's ports in a Pod definition, the conventional container-port syntax is:
- containerPort: 1234
  name: alan
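In the Helm values file that would translate into something like the sketch below, assuming the chart's controller.customPorts key; the port number and name are only illustrative:

controller:
  customPorts:
  - name: alan              # illustrative name
    containerPort: 1234     # illustrative port
    protocol: TCP

Note that this only adds the port to the controller pod; depending on the chart version you may also need to add the port to the controller's Service (and open it on your load balancer) separately.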
Application A and application B are two applications running in the same Kubernetes cluster. Application A can access B by reading the B_HOST env (with value b.example.com) passed to A's container. Is there any way by which A would be able to access B:
internally: using the DNS name of B's service (b.default.svc.cluster.local)
externally: using the FQDN of B, that is also defined in the ingress resource (b.example.com)
at the same time?
For example,
If you try to curl b.example.com inside the pod/container of A, it should resolve to b.default.svc.cluster.local and get the result via that service.
If you try to curl b.example.com outside the k8s cluster, it should use ingress to reach the service B and get the results.
As a concept, adding an extra host entry (that maps B's FQDN to its service IP) to container A's /etc/hosts should work. But that doesn't seem to be a good practice, as it requires getting the IP address of B's service in advance and then creating A's pod with that HostAliases config. Patching this field into an existing pod is not allowed. The service IP changes when you recreate the service, and adding the DNS name of the service instead of its IP in HostAliases is also not supported.
So, what would be a good method to achieve this?
Found a similar discussion in this thread.
Additional Info:
I'm using Azure Kubernetes Service (AKS) with Application Gateway as the ingress controller (AGIC).
You can try different methods, then see which one works for you.
Method 1:
Modify the CoreDNS configuration of your k8s cluster.
Reference: https://coredns.io/2017/05/08/custom-dns-entries-for-kubernetes/
In AKS, it can be done as described here:
https://learn.microsoft.com/en-us/azure/aks/coredns-custom#rewrite-dns
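For instance, here is a sketch of an AKS coredns-custom ConfigMap that rewrites b.example.com to the in-cluster service name, adapted from the Microsoft example linked above; the zone, data key, and regex are illustrative and should be adjusted to your setup:

apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns-custom        # AKS merges a ConfigMap with exactly this name
  namespace: kube-system
data:
  example.server: |           # illustrative key; must end in .server
    example.com:53 {
        errors
        rewrite stop {
          name regex b\.example\.com b.default.svc.cluster.local
          answer auto
        }
        forward . /etc/resolv.conf
    }

After applying it, restart the CoreDNS pods (e.g. kubectl delete pod -n kube-system -l k8s-app=kube-dns) so the change is picked up.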
Method 2:
Specifying an externalIP manually for service B and then adding the same IP to pod A's /etc/hosts file using hostAliases seems to work.
Part of pod definition of app A:
apiVersion: v1
kind: Pod
metadata:
  name: a
  labels:
    app: a
spec:
  hostAliases:
  - ip: "10.0.3.165"
    hostnames:
    - "b.example.com"
Part of service definition of app B:
apiVersion: v1
kind: Service
metadata:
  name: b
spec:
  selector:
    app: b
  externalIPs:
  - 10.0.3.165
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
But I'm not sure if that is a good practice; there could be pitfalls. One is that the externalIP we define could be any random valid IP address, private or public, as long as it doesn't conflict with other IPs used by cluster resources; unpredictable behaviour can result if overlapping IP ranges are used.
Method 3:
The clusterIP of the service will be available inside pod A as an environment variable B_SERVICE_HOST by default.
So, instead of adding an externalIP, you can try to get the actual service IP (clusterIP) of B from the B_SERVICE_HOST env and add it to pod A's /etc/hosts - either using hostAliases or directly, whichever works.
echo $B_SERVICE_HOST 'b.example.com' >> /etc/hosts
You can do this using a postStart hook for the container in the pod definition:
containers:
- image: "myreg/myimagea:tag"
  name: container-a
  lifecycle:
    postStart:
      exec:
        command: ["/bin/sh", "-c", "echo $B_SERVICE_HOST 'b.example.com' >> /etc/hosts"]
Since this is a container lifecycle hook, the change is specific to that one container, so other containers in the same pod will not have the same entry in their hosts file.
Also note that service B must be created before pod A; otherwise the B_SERVICE_HOST env variable won't be available in A's container.
Method 4:
You can try to create a public DNS zone and a private DNS zone in your cloud tenant, then add records in them pointing to the service. For example, create a private DNS zone in Azure and do either of the following:
Add an A record mapping b.example.com to svc B's clusterIP.
Add a CNAME record mapping b.example.com to the internal load balancer DNS label provided by Azure for the service. On a wider perspective, if you have multiple applications in the cluster with the same requirement, create a static IP, then create a LoadBalancer-type service for your ingress controller using this static IP as loadBalancerIP and with the annotation service.beta.kubernetes.io/azure-dns-label-name as described here; you'll get a DNS label for that service. Then add a CNAME record in your private zone mapping *.example.com to this Azure-provided DNS label (see the sketch after this list). Still, I doubt whether this would be suitable if your ingress controller is Azure Application Gateway.
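A rough sketch of what such an ingress controller Service could look like; the name, selector, static IP, and DNS label are placeholders, not values from your cluster:

apiVersion: v1
kind: Service
metadata:
  name: ingress-controller                 # placeholder name
  annotations:
    service.beta.kubernetes.io/azure-dns-label-name: my-ingress-label   # placeholder label
spec:
  type: LoadBalancer
  loadBalancerIP: 20.30.40.50              # placeholder static IP created in advance
  selector:
    app: ingress-controller                # placeholder selector matching your controller pods
  ports:
  - port: 80
    targetPort: 80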
NOTE:
Also consider how the method you adopt will affect your debugging process in the future if any networking-related issue arises.
If you feel that would be a problem, consider using two different environment variables, B_HOST and B_PUBLIC_HOST, for internal and external access respectively.
I have a working Nexus 3 pod, reachable on port 30080 (with NodePort): http://nexus.mydomain:30080/ works perfectly from all hosts (from the cluster or outside).
Now I'm trying to make it accessible on port 80 (for obvious reasons).
Following the docs, I've implemented it like this (trivial):
[...]
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: nexus-ingress
  namespace: nexus-ns
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  rules:
  - host: nexus.mydomain
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          serviceName: nexus-service
          servicePort: 80
Applying it works without errors. But when I try to reach http://nexus.mydomain, I get:
Service Unavailable
No logs are shown (the webapp is not hit).
What did I miss?
K3s Lightweight Kubernetes
K3s is designed to be a single binary of less than 40MB that completely implements the Kubernetes API. In order to achieve this, they removed a lot of extra drivers that didn't need to be part of the core and are easily replaced with add-ons.
As I mentioned in the comments, K3s uses the Traefik Ingress Controller by default.
Traefik is an open-source Edge Router that makes publishing your services a fun and easy experience. It receives requests on behalf of your system and finds out which components are responsible for handling them.
This information can be found in K3s Rancher Documentation.
Traefik is deployed by default when starting the server... To prevent k3s from using or overwriting the modified version, deploy k3s with --no-deploy traefik and store the modified copy in the k3s/server/manifests directory. For more information, refer to the official Traefik for Helm Configuration Parameters.
To disable it, start each server with the --disable traefik option.
If you want to deploy Nginx Ingress controller, you can check guide How to use NGINX ingress controller in K3s.
As you are using an Nginx-specific annotation (nginx.ingress.kubernetes.io/rewrite-target: /$1), you have to use the Nginx ingress controller.
If you run more than one ingress controller, you will need to force the use of the Nginx one with an annotation:
annotations:
  kubernetes.io/ingress.class: "nginx"
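For example, applied to the Ingress from your question it might look like this (just a sketch showing where the annotation goes; everything else is unchanged):

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: nexus-ingress
  namespace: nexus-ns
  annotations:
    kubernetes.io/ingress.class: "nginx"              # pins this Ingress to the nginx controller
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  rules:
  - host: nexus.mydomain
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          serviceName: nexus-service
          servicePort: 80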
If the information above doesn't help, please provide more details, like your Deployment and Service.
I do not think you can expose it on port 80 or 443 with a NodePort service, or at least it is not recommended.
In this configuration, the NGINX container remains isolated from the
host network. As a result, it can safely bind to any port, including
the standard HTTP ports 80 and 443. However, due to the container
namespace isolation, a client located outside the cluster network
(e.g. on the public internet) is not able to access Ingress hosts
directly on ports 80 and 443. Instead, the external client must append
the NodePort allocated to the ingress-nginx Service to HTTP requests.
-- Bare-metal considerations - NGINX Ingress Controller
* Emphasis added by me.
While it may sound tempting to reconfigure the NodePort range using
the --service-node-port-range API server flag to include unprivileged
ports and be able to expose ports 80 and 443, doing so may result in
unexpected issues including (but not limited to) the use of ports
otherwise reserved to system daemons and the necessity to grant
kube-proxy privileges it may otherwise not require.
This practice is therefore discouraged. See the other approaches
proposed in this page for alternatives.
-- Bare-metal considerations - NGINX Ingress Controller
I did a similar setup a couple of months ago. I installed a MetalLB load balancer and then exposed the service. Depending on your provider (e.g., GKE), a load balancer can even be spun up automatically, so you possibly don't have to deal with MetalLB at all; that said, MetalLB is not hard to set up and works great. A configuration sketch follows below.
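As an illustration, a minimal layer 2 address pool for the older, ConfigMap-based MetalLB configuration might look like this (newer MetalLB releases use CRDs instead); the address range is made up and must be free addresses on your node network:

apiVersion: v1
kind: ConfigMap
metadata:
  name: config
  namespace: metallb-system
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.1.240-192.168.1.250   # placeholder range

With that in place, a Service of type LoadBalancer gets an IP from this pool and can listen on ports 80/443 directly.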
I've set up Traefik as an Ingress controller in Kubernetes with this configuration: https://github.com/RedxLus/traefik-simple-kubernetes/tree/master/V1.7
It works well for HTTP and HTTPS, but I don't know how I can open other ports to forward, for example, to a pod running MySQL on port 3306.
Thanks for every answer!
Traefik doesn't support it if you are using an Ingress resource, since that resource doesn't support L4 traffic, as mentioned in the other answer.
But if you are using an Nginx ingress controller there is a workaround: use a ConfigMap together with the ingress controller options --tcp-services-configmap and --udp-services-configmap, as described here. Your tcp-services ConfigMap would then look something like this:
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  9000: "default/example-go:8080"
The advantage of this is having a single entry point to your cluster (this applies to any ingress that would be used for TCP/UDP), but the downside is the overhead of an extra layer compared to simply having a Kubernetes Service (NodePort or LoadBalancer) that already listens on TCP/UDP ports.
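As a companion sketch, the proxied port also has to be exposed on the Service that fronts the ingress-nginx controller; the names and selector below are placeholders and must match your actual controller deployment:

apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx                        # placeholder; use your controller's Service name
  namespace: ingress-nginx
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx    # placeholder; must match the controller pods
  ports:
  - name: http
    port: 80
    targetPort: 80
  - name: proxied-tcp-9000                   # matches the "9000" key in the ConfigMap above
    port: 9000
    targetPort: 9000
    protocol: TCP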
The Kubernetes Ingress API does not support it. But it is possible to use Traefik as a TCP proxy for your desired use case, but only if you make use of TLS-encrypted connections. Otherwise, at layer 4 it's not possible to distinguish between the different hostnames, and you would have to use one entrypoint per TCP router. Check this issue on GitHub.
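To illustrate, with Traefik v2 this kind of routing is typically declared through its IngressRouteTCP custom resource rather than a standard Ingress; in the sketch below the entrypoint, hostname, and backend service are placeholders, and SNI-based matching only works for TLS connections:

apiVersion: traefik.containo.us/v1alpha1
kind: IngressRouteTCP
metadata:
  name: mysql-tcp                        # placeholder name
  namespace: default
spec:
  entryPoints:
  - mysql                                # placeholder entrypoint defined in Traefik's static config
  routes:
  - match: HostSNI(`db.example.com`)     # placeholder hostname; SNI requires TLS
    services:
    - name: mysql                        # placeholder backend Service
      port: 3306
  tls:
    passthrough: true                    # hand the TLS stream through to the backend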
I am trying to deploy and access Sock-shop on Google Cloud Platform.
https://github.com/microservices-demo/microservices-demo
I was able to deploy it using the deployment script
https://github.com/microservices-demo/microservices-demo/blob/master/deploy/kubernetes/complete-demo.yaml
Based on the tutorial here
https://www.weave.works/docs/cloud/latest/tasks/deploy/sockshop-deploy/
It says
Display the Sock Shop in the browser using:
<master-node-IP>:<NodePort>
But on GCP the master node is hidden from the user.
So I changed the type from NodePort to LoadBalancer.
And I was able to get an external IP.
But it says the page cannot be found.
Do I need to set up more stuff for LoadBalancer?
I don't know if you have solved the issue yet, but I did, so I would like to share the solution that worked for me.
You can do it through two ways:
1st) By creating a Load Balancer, where you expose the front-end service.
I assume that you have already created a namespace called sock-shop, so any further command should specify and refer to that namespace.
If you type and execute the command:
kubectl get services --namespace=sock-shop
you should be able to see a list with all the services, including a service called "front-end". So now you want to expose that service not as NodePort but as LoadBalancer. So, execute the command:
kubectl expose service front-end --name=front-end-lb --port=80 --target-port=8079 --type=LoadBalancer --namespace=sock-shop
After this, give it some time and you will be able to access the front end of the Sock Shop via a public IP address (ephemeral).
2nd) A more advanced way is to configure an Ingress load balancer.
You need to create another yaml file, put the code below inside, and apply it as you did with the previous .yaml file.
nano basic-ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  namespace: sock-shop
  name: basic-ingress
spec:
  backend:
    serviceName: front-end
    servicePort: 80
kubectl apply -f basic-ingress.yaml --namespace=sock-shop
Locate the public IP address with the command below; after a maximum of about 15 minutes you should be able to access the Sock Shop.
kubectl get ingress --namespace=sock-shop
I would recommend switching back to NodePort in the corresponding Service and creating an Ingress resource in your GCP cluster.
If you want to access the related application from outside the cluster, Kubernetes provides the Ingress mechanism to expose HTTP and HTTPS routes to your internal services.
Basically, an HTTP(S) Load Balancer is created by default in GKE once an Ingress resource has been deployed successfully, and it will take care of routing all external HTTP/S traffic to the backing Kubernetes services.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: basic-ingress
spec:
  backend:
    serviceName: web
    servicePort: 8080
You can check the External IP address for Load Balancer by the following command:
kubectl get ingress basic-ingress
I think this article would be very useful for your further research.
I am using the Concourse Helm chart provided at https://github.com/kubernetes/charts/tree/master/stable/concourse to set up Concourse inside our Kubernetes cluster. I have been able to get the setup working and I am able to access it within the cluster, but I am having trouble accessing it outside the cluster. The notes from the chart show that I can just use kubectl port-forward to get to the webpage, but I don't want all of the developers to have to forward the port just to get to the web UI. I have tried creating a service with a NodePort like this:
apiVersion: v1
kind: Service
metadata:
  name: concourse
  namespace: concourse-ci
spec:
  ports:
  - port: 8080
    name: atc
    nodePort: 31080
  - port: 2222
    name: tsa
    nodePort: 31222
  selector:
    app: concourse-web
  type: NodePort
This allows me to get to the webpage and interact with it in most ways, but when I try to look at a build's status it never loads the events that happened. Instead, a network request for /api/v1/builds/1/events is stuck in pending and the steps of the build never load. Any ideas what I can do to be able to fully access Concourse from outside the cluster?
EDIT: It seems like the events network request normally responds with a text/event-stream data type, and maybe the Kubernetes service isn't handling an event stream correctly. Or there is something about Concourse that handles event streams differently than the norm.
After plenty of investigation I have found that the NodePort service is actually working; it was just my antivirus (Sophos) silently blocking the response to the events request.
Also, you can expose your port through a LoadBalancer service in Kubernetes.
kubectl get deployments
kubectl expose deployment <web deployment name> --port=80 --target-port=8080 --name=expoport --type=LoadBalancer
It will create a public IP for you, and you will be able to access concourse on port 80.
Not sure, since I'm also a newbie, but you can configure your chart by providing your own version of https://github.com/kubernetes/charts/blob/master/stable/concourse/values.yaml
helm install stable/concourse -f custom_values.yaml
There is an 'externalURL' param; it may be worth trying to set it to your URL (see the sketch after the excerpt below).
## URL used to reach any ATC from the outside world.
##
# externalURL:
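For instance, a minimal custom_values.yaml could look like the sketch below; the URL is a placeholder, and whether externalURL sits at the top level depends on your chart version, so check it against the chart's own values.yaml:

## custom_values.yaml - a sketch, not taken verbatim from the chart
externalURL: http://concourse.mydomain:31080   # placeholder URL reachable from outside the cluster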
In addition, if you are on GKE, you can use an internal load balancer; set it up in your values.yaml file:
service:
  ## For minikube, set this to ClusterIP, elsewhere use LoadBalancer or NodePort
  ## ref: https://kubernetes.io/docs/user-guide/services/#publishing-services---service-types
  ##
  #type: ClusterIP
  type: LoadBalancer

  ## When using web.service.type: LoadBalancer, sets the user-specified load balancer IP
  # loadBalancerIP: 172.217.1.174

  ## Annotations to be added to the web service.
  ##
  annotations:
    # May be used in example for internal load balancing in GCP:
    cloud.google.com/load-balancer-type: Internal