How to expose a range of ports in Kubernetes? - kubernetes

How can we expose a range of ports in Kubernetes?
My cloud application uses a range of ports (40000 to 42000) when running.
How do I specify a range of exposed ports in Kubernetes service yaml file?

Kubernetes services currently do not support port ranges, see https://github.com/kubernetes/kubernetes/issues/23864
Update: As of 2021 there is a Kubernetes enhancement proposal filed for this requirement: https://github.com/kubernetes/enhancements/pull/2611

One option is to use Go templates to generate the YAML for all of the ports, compile it with gomplate, and then deploy the resulting YAML:
lotsofports.yaml
apiVersion: v1
kind: Service
metadata:
  name: some-service
  namespace: some-service
  labels:
    name: some-service
spec:
  ports:
  {{- range seq 0 2000}}
  - port: {{add 40000 .}}
    targetPort: {{add 40000 .}}
    name: exposed-port-{{add 40000 .}}
  {{- end }}
  selector:
    name: some-service
  sessionAffinity: None
  type: ClusterIP
Then compile the yaml:
gomplate -f lotsofports.yaml > service.yaml
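The rendered manifest can then be applied as usual (assuming a configured kubectl context):
kubectl apply -f service.yaml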

As @Thomas pointed out, this is not supported yet.
However, as a workaround you can use Helm templates.
Create a chart with a service template and define the ports in the values.yaml file.
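A minimal sketch of such a chart, assuming made-up portRangeStart/portRangeEnd value names (untilStep's end is exclusive):
templates/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: {{ .Release.Name }}-ports
spec:
  type: ClusterIP
  selector:
    app: {{ .Release.Name }}
  ports:
    {{- range untilStep (int .Values.portRangeStart) (int .Values.portRangeEnd) 1 }}
    - name: port-{{ . }}
      port: {{ . }}
      targetPort: {{ . }}
    {{- end }}
values.yaml
portRangeStart: 40000
portRangeEnd: 42001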

Related

Correct configuration of Prometheus ServiceMonitor to scrape RabbitMQ Metrics

I'm trying to make Prometheus find metrics from RabbitMQ (and a few other services, but the logic is the same).
My current configuration is:
# RabbitMQ Service
# This lives in the `default` namespace
kind: Service
apiVersion: v1
metadata:
  name: rabbit-metrics-service
  labels:
    name: rabbit-metrics-service
spec:
  ports:
    - protocol: TCP
      port: 15692
      targetPort: 15692
  selector:
    # This selects the deployment and it works
    app: rabbitmq
I then created a ServiceMonitor:
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  # The name of the service monitor
  name: rabbit-monitor
  # The namespace it will be in
  namespace: kube-prometheus-stack
  labels:
    # How to find this service monitor
    # The name I should use in `serviceMonitorSelector`
    name: rabbit-monitor
spec:
  endpoints:
    - interval: 5s
      port: metrics
      path: /metrics
  # The namespace of origin service
  namespaceSelector:
    matchNames:
      - default
  selector:
    matchLabels:
      # Where the monitor will attach to
      name: rabbit-metrics-service
kube-prometheus-stack has the following values.yml:
# values.yml
prometheusSpec:
  serviceMonitorSelector:
    matchLabels:
      name:
        - rabbit-monitor
So, from what I understand: in the metadata/labels section I define a labelKey/labelValue pair, and then reference this pair in selector/matchLabels. I then add a custom serviceMonitorSelector that will match N labels. If it finds the labels, Prometheus should discover the ServiceMonitor, and hence the metrics endpoint, and start scraping. But I guess there's something wrong with this logic. I tried a few other variations of this as well, but with no success.
Any ideas on what I might be doing wrong?
Documentation usually uses the same name everywhere, so I can't quite understand where exactly that name should come from, since I tend to add -service and -deployment suffixes to resources to be able to identify them easily later. I already added the RabbitMQ Prometheus plugin, and the metrics endpoint seems to be working fine.

How to use Argo CD to pass environment variables to deploy in kubernetes

We are using Argo CD and Kubernetes, and I want to use environment variables in the YAML file.
For example:
apiVersion: v1
kind: Service
metadata:
  name: guestbook-ui
  annotations:
spec:
  ports:
    - port: $PORT
      targetPort: $TARGET_PORT
  selector:
    app: guestbook-ui
I want to set the values of the environment variables (PORT and TARGET_PORT) when deploying it with Argo CD.
What should I do?
I'd recommend converting your raw YAML to a Helm chart and templating the relevant fields.
Argo CD has an example Helm app with a service similar to yours.
You could define a service like this:
apiVersion: v1
kind: Service
metadata:
  name: guestbook-ui
  annotations:
spec:
  ports:
    - port: {{ .Values.service.port }}
      targetPort: {{ .Values.service.targetPort }}
  selector:
    app: guestbook-ui
And then define your port and targetPort parameters in Argo CD.
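As a sketch (the repository URL, chart path, and namespaces below are placeholders), the chart's values.yaml could hold defaults and the Argo CD Application could override them with Helm parameters:
values.yaml
service:
  port: 80
  targetPort: 8080
application.yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: guestbook-ui
  namespace: argocd
spec:
  project: default
  destination:
    server: https://kubernetes.default.svc
    namespace: default
  source:
    repoURL: https://example.com/my/repo.git
    path: charts/guestbook-ui
    targetRevision: HEAD
    helm:
      parameters:
        - name: service.port
          value: "80"
        - name: service.targetPort
          value: "8080"
Note that Helm parameter values are strings, which is why the numbers are quoted.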

Can't reach my Grafana dashboard on k8s cluster using ingress from browser

I've installed Prometheus and Grafana on my Kubernetes cluster using helm:
$ helm install prometheus prometheus-community/kube-prometheus-stack
All the pods, deployments and services are up and running. When I use port-forwarding like this:
kubectl port-forward deployment/prometheus-grafana 3000
I can reach my Grafana dashboard using the browser, but when I use ingress instead of port-forward I get an error response and can't reach the Grafana dashboard.
My ingress yaml file:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: grafana-ingress
  namespace: default
spec:
  ingressClassName: kong
  rules:
    - http:
        paths:
          - path: /grafana/login
            pathType: Prefix
            backend:
              service:
                name: prometheus-grafana
                port:
                  number: 80
And the prometheus-grafana service YAML file is:
apiVersion: v1
kind: Service
metadata:
  annotations:
    meta.helm.sh/release-name: prometheus
    meta.helm.sh/release-namespace: default
  creationTimestamp: "2021-09-15T11:07:30Z"
  labels:
    app.kubernetes.io/instance: prometheus
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: grafana
    app.kubernetes.io/version: 8.1.2
    helm.sh/chart: grafana-6.16.4
  name: prometheus-grafana
  namespace: default
  resourceVersion: "801373"
  uid: e1f57de9-94d0-460a-a427-4a97fd770e12
spec:
  clusterIP: 10.100.90.147
  clusterIPs:
    - 10.100.90.147
  ports:
    - name: service
      port: 80
      protocol: TCP
      targetPort: 3000
  selector:
    app.kubernetes.io/instance: prometheus
    app.kubernetes.io/name: grafana
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
I have posted a community wiki answer for better visibility. The problem is solved and it was related to the ingress YAML file.
Solution:
I changed my ingress file: added host: grafana.example.com and changed the path to /. Everything works smoothly.
The suggestion of the solution:
What I mean is, the Ingress defines only one path, /grafana/login, with type Prefix. Surely Grafana has paths other than /grafana/login, so the first thing I'd try is to just use a single path, /grafana. When you use port-forward, at which path can you open the Grafana dashboard? Because Grafana probably expects requests to arrive at that same path or paths.
Explanation:
Grafana is a web app and it expects to be served directly under the root path of the server. You need to expose it under / as the path, use rewrite-target rules, or serve it under a subdomain of your host. As a first step, check whether everything works as expected with path /.
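Put together, the working Ingress from the solution above would look roughly like this (grafana.example.com is the example host from the answer):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: grafana-ingress
  namespace: default
spec:
  ingressClassName: kong
  rules:
    - host: grafana.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: prometheus-grafana
                port:
                  number: 80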

How to add extra hosts entries in helm charts

So I'm deploying my application stack on Kubernetes using Helm charts, and now I need to add some dependent servers' IPs and hostnames to my pods' /etc/hosts file, so I need help with this scenario.
A Helm-templated solution to the original question. I tested this with Helm 3.
apiVersion: apps/v1
kind: Deployment
spec:
  template:
    spec:
      {{- with .Values.hostAliases }}
      hostAliases:
{{ toYaml . | indent 8 }}
      {{- end }}
For values such as:
hostAliases:
  - ip: "10.0.0.1"
    hostnames:
      - "host.domain.com"
If hostAliases is omitted or commented out in the values, the hostAliases section is omitted when the template is rendered.
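For example, the aliases could be supplied from a separate values file at install time (the release name, chart path, and file name are illustrative):
helm upgrade --install my-app ./my-chart -f hostaliases-values.yaml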
As stated in the documentation, you can add extra hosts to a Pod by using the host aliases feature.
Example from docs:
apiVersion: v1
kind: Pod
metadata:
  name: hostaliases-pod
spec:
  restartPolicy: Never
  hostAliases:
    - ip: "127.0.0.1"
      hostnames:
        - "foo.local"
        - "bar.local"
    - ip: "10.1.2.3"
      hostnames:
        - "foo.remote"
        - "bar.remote"
  containers:
    - name: cat-hosts
      image: busybox
      command:
        - cat
      args:
        - "/etc/hosts"
Kubernetes provides a DNS service that all pods can use. In turn, you can define an ExternalName service that just defines a DNS record. Once you do that, your pods can talk to that service the same way they'd talk to any other Kubernetes service, and reach the external server it points to.
You could deploy a set of ExternalName services globally. You could do it in a Helm chart too, if you wanted, with something like:
apiVersion: v1
kind: Service
metadata:
  name: {{ .Release.Name }}-{{ .Chart.Name }}-foo
spec:
  type: ExternalName
  externalName: {{ .Values.fooHostname }}
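A matching values entry for this template could be as simple as (the hostname below is just an example):
fooHostname: db.internal.example.com
Pods then reach that server via the stable in-cluster name of the ExternalName service instead of a hard-coded /etc/hosts entry.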
The practice I've learned is that you should avoid using /etc/hosts if at all possible.

helm with ingress controllers

We have an application that gets created using Helm. Every time we do a release, it creates a service with the release name in it. How do we handle this in the ALB ingress if the service name keeps changing?
For example, for the ALB ingress (under kOps) I have the rule below:
- host: pluto.example.com
  paths:
    - path: /
      backend:
        serviceName: pluto-service
        servicePort: 8080
With a different Helm release, pluto-service will have a new name. How do I handle the ingress?
You can also try to use the --reuse-values flag with the helm upgrade command. This will reuse the last release's values.
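For example (the release and chart names are placeholders):
helm upgrade pluto ./pluto-chart --reuse-values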
Is the ingress declared with Helm too?
If so, and if the service uses {{ .Release.Name }}-service as its name, you can also use {{ .Release.Name }}-service as the ingress' service name. You can also write your own template helper (and add it to the _helpers.tpl file) to determine the service name, as in the sketch after this answer.
If not, maybe you should ...
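As a sketch of that _helpers.tpl approach (the helper name mychart.serviceName is just an example), the service name is defined once and reused in both the Service and the Ingress templates:
_helpers.tpl
{{- define "mychart.serviceName" -}}
{{ .Release.Name }}-service
{{- end -}}
templates/service.yaml (excerpt)
metadata:
  name: {{ include "mychart.serviceName" . }}
templates/ingress.yaml (excerpt)
backend:
  serviceName: {{ include "mychart.serviceName" . }}
  servicePort: 8080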
You can create a service in Helm where you pass a different value for the name of the service; most likely you use the release name right now. For example, create a Helm chart for your application where you pass the name as a value:
apiVersion: v1
kind: Service
metadata:
  name: {{ .Values.nameOverride }}
spec:
  type: NodePort
  ports:
    - name: http-service
      targetPort: 5000
      protocol: TCP
      port: 80
  selector:
    app: <MyApp>
And in the values.yaml of the chart you can specify the name of your service: nameOverride: MyService
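Every release can then keep the same service name by setting that value explicitly (the release and chart names here are placeholders), so the ingress backend can keep pointing at a stable serviceName:
helm upgrade --install pluto ./pluto-chart --set nameOverride=pluto-service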