We have an application that gets created using Helm. Every time we do a release, it creates a service with the release name in it. How do we handle this in alb-ingress if the service name keeps changing?
For example, for the ALB ingress (under kops) I have the rule below:
- host: pluto.example.com
  http:
    paths:
    - path: /
      backend:
        serviceName: pluto-service
        servicePort: 8080
With a different Helm release, pluto-service will have a new name. How do I handle the ingress?
You can also try the --reuse-values flag with the helm upgrade command. This will reuse the last release's values.
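For example (the release name and chart path here are placeholders):
helm upgrade --reuse-values pluto ./pluto-chart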
Is the ingress declared with Helm too?
If so, and if the service uses {{ .Release.Name }}-service as its name, you can also use {{ .Release.Name }}-service as the ingress's service name. You can also write your own template function (and add it to the _helpers.tpl file) to determine the service name.
If not, maybe you should ...
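For the first case, here is a minimal sketch; the helper name myapp.serviceName and the surrounding chart layout are illustrative:

{{/* templates/_helpers.tpl */}}
{{- define "myapp.serviceName" -}}
{{ .Release.Name }}-service
{{- end -}}

And in the ingress template:

- host: pluto.example.com
  http:
    paths:
    - path: /
      backend:
        serviceName: {{ include "myapp.serviceName" . }}
        servicePort: 8080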
You can create a service in Helm where you pass a different value for the name of the service; most likely you use the release name right now. For example, create a Helm chart for your application where you pass the name as a value:
apiVersion: v1
kind: Service
metadata:
  name: {{ .Values.nameOverride }}
spec:
  type: NodePort
  ports:
  - name: http-service
    targetPort: 5000
    protocol: TCP
    port: 80
  selector:
    app: <MyApp>
And in the values.yaml of the chart you can specify the name of your service:

nameOverride: MyService
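With that in place, the ingress can point at the stable name, and you can set it per environment at install time (release and chart names here are placeholders):
helm upgrade --install pluto ./pluto-chart --set nameOverride=pluto-service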
We are using Argo CD and Kubernetes, and I want to use environment variables in the YAML file.
For example,
apiVersion: v1
kind: Service
metadata:
  name: guestbook-ui
  annotations:
spec:
  ports:
  - port: $PORT
    targetPort: $TARGET_PORT
  selector:
    app: guestbook-ui
I want to set the values of the environment variables (PORT and TARGET_PORT) when deploying to Argo CD.
What should I do?
I'd recommend converting your raw YAML to a Helm chart and templating the relevant fields.
Argo CD has an example Helm app with a service similar to yours.
You could define a service like this:
apiVersion: v1
kind: Service
metadata:
  name: guestbook-ui
  annotations:
spec:
  ports:
  - port: {{ .Values.service.port }}
    targetPort: {{ .Values.service.targetPort }}
  selector:
    app: guestbook-ui
And then define your port and targetPort parameters in Argo CD.
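For example, a sketch of an Argo CD Application that sets those Helm parameters (the repoURL and path are placeholders):

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: guestbook-ui
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/your-org/your-repo.git
    targetRevision: HEAD
    path: guestbook-chart
    helm:
      parameters:
      - name: service.port
        value: "80"
      - name: service.targetPort
        value: "8080"
  destination:
    server: https://kubernetes.default.svc
    namespace: default

You can also set them from the CLI with argocd app set guestbook-ui -p service.port=80.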
I deployed MongoDB in a Kubernetes cluster with this Helm chart: https://github.com/helm/charts/tree/master/stable/mongodb. All is right: I can connect to Mongo from within a replica set container, or from outside the cluster with a port-forward or a NodePort service. But I can't connect via an ingress.
When the ingress is deployed, I can curl MongoDB and get the famous message: "It looks like you are trying to access MongoDB over HTTP on the native driver port.". But I can't connect with a mongo client; the connection hangs, and I can see in the MongoDB logs that the connection never reaches Mongo.
Does anyone have any information about accessing MongoDB via an Ingress object? Maybe it's a protocol problem?
The ingress manifest:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: {{ template "mongodb.fullname" . }}
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
  - host: {{ .Values.ingress.hostName }}
    http:
      paths:
      - path: /
        backend:
          serviceName: "{{ template "mongodb.fullname" $ }}"
          servicePort: mongodb
  tls:
  - hosts:
    - {{ .Values.ingress.hostName }}
    secretName: secret
Thank you very much!
Ingress controllers are designed for HTTP connections; as the error hinted, an ingress is not the way to access MongoDB.
None of the information in an ingress definition makes much sense for a plain TCP connection: host names and HTTP URL paths don't apply.
Some ingress controllers (like nginx-ingress) can also expose plain TCP services, but not via an Ingress definition; they use custom ConfigMaps.
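For instance, ingress-nginx reads TCP mappings from a ConfigMap referenced by its --tcp-services-configmap flag; the namespace and service names below are placeholders:

apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  # external port -> namespace/service:port
  "27017": "default/my-mongodb:27017"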
Use a Service with type: LoadBalancer if your hosting environment supports it, or type: NodePort if not. There is an example in the stable MongoDB Helm chart and its associated values.
apiVersion: v1
kind: Service
metadata:
  name: {{ template "mongodb.fullname" . }}
  labels:
    app: {{ template "mongodb.name" . }}
spec:
  type: LoadBalancer
  ports:
  - name: mongodb
    port: 27017
    targetPort: mongodb
  - name: metrics
    port: 9216
    targetPort: metrics
How can we expose a range of ports in Kubernetes?
My cloud application uses a range of ports (40000~42000) when running.
How do I specify a range of exposed ports in a Kubernetes Service YAML file?
Kubernetes Services currently do not support port ranges; see https://github.com/kubernetes/kubernetes/issues/23864
Update: as of 2021 there is a Kubernetes enhancement proposal for this requirement: https://github.com/kubernetes/enhancements/pull/2611
One option is to use Go templates to generate the YAML entries for all of the ports, compile the template with gomplate, and then deploy the resulting YAML.
lotsofports.yaml
apiVersion: v1
kind: Service
metadata:
  name: some-service
  namespace: some-service
  labels:
    name: some-service
spec:
  ports:
  {{- range seq 0 2000 }}
  - port: {{ add 40000 . }}
    targetPort: {{ add 40000 . }}
    name: exposed-port-{{ add 40000 . }}
  {{- end }}
  selector:
    name: some-service
  sessionAffinity: None
  type: ClusterIP
Then compile the YAML:
gomplate -f lotsofports.yaml > service.yaml
As @Thomas pointed out, this is not supported yet.
However, as a workaround you can use Helm templates.
Create a chart with a service template and the ports defined in the values.yaml file.
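A minimal sketch of such a template, assuming a portRange value and using Sprig's untilStep (all names are illustrative, and note that untilStep's end is exclusive):

apiVersion: v1
kind: Service
metadata:
  name: {{ .Release.Name }}-ports
spec:
  selector:
    app: {{ .Release.Name }}
  ports:
  {{- range untilStep (int .Values.portRange.start) (int .Values.portRange.end) 1 }}
  - name: port-{{ . }}
    port: {{ . }}
    targetPort: {{ . }}
  {{- end }}

With values.yaml such as:

portRange:
  start: 40000
  end: 42001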
I am trying to build a Kubernetes environment from scratch using Google's Deployment Manager and Kubernetes Engine. So far, the cluster is configured to host two apps. Each app is served by an exclusive service, which in turn receives traffic from an exclusive ingress. Both ingresses are created with the same Deployment Manager Jinja template:
- name: {{ NAME_PREFIX }}-ingress
  type: {{ CLUSTER_TYPE_BETA }}:{{ INGRESS_COLLECTION }}
  metadata:
    dependsOn:
    - {{ properties['cluster-type-v1beta1-extensions'] }}
  properties:
    apiVersion: extensions/v1beta1
    kind: Ingress
    namespace: {{ properties['namespace'] | default('default') }}
    metadata:
      name: {{ NAME_PREFIX }}
      labels:
        app: {{ env['name'] }}
        deployment: {{ env['deployment'] }}
    spec:
      rules:
      - host: {{ properties['host'] }}
        http:
          paths:
          - backend:
              serviceName: {{ NAME_PREFIX }}-svc
              servicePort: {{ properties['node-port'] }}
The environment deployment works fine. However, I was hoping that both ingresses would be bound to the same external address, which is not happening. How could I set up the template so that this restriction is enforced? More generally, is it considered a Kubernetes bad practice to spawn one ingress for each of the environment's host-based rules?
Each Ingress will create its own HTTP(S) load balancer. If you want a single IP, define a single Ingress with multiple host rules, one for each service.
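A sketch of what that could look like (the hosts and service names are placeholders):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: combined-ingress
spec:
  rules:
  - host: app1.example.com
    http:
      paths:
      - backend:
          serviceName: app1-svc
          servicePort: 80
  - host: app2.example.com
    http:
      paths:
      - backend:
          serviceName: app2-svc
          servicePort: 80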
So I'm deploying my application stack on Kubernetes using Helm charts, and now I need to add some dependent servers' IPs and hostnames inside my pods' /etc/hosts file, so I need help with this scenario.
A Helm-templated solution to the original question; I tested this with Helm 3.
apiVersion: apps/v1
kind: Deployment
spec:
  template:
    spec:
      {{- with .Values.hostAliases }}
      hostAliases:
{{ toYaml . | indent 8 }}
      {{- end }}
For values such as:
hostAliases:
- ip: "10.0.0.1"
  hostnames:
  - "host.domain.com"
If hostAliases is omitted or commented out in the values, the hostAliases section is omitted when the template is rendered.
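For reference, with the values above, the rendered Deployment fragment looks roughly like this:

spec:
  template:
    spec:
      hostAliases:
        - ip: "10.0.0.1"
          hostnames:
          - "host.domain.com"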
As stated in the documentation, you can add extra hosts to a Pod by using the host aliases feature.
Example from docs:
apiVersion: v1
kind: Pod
metadata:
  name: hostaliases-pod
spec:
  restartPolicy: Never
  hostAliases:
  - ip: "127.0.0.1"
    hostnames:
    - "foo.local"
    - "bar.local"
  - ip: "10.1.2.3"
    hostnames:
    - "foo.remote"
    - "bar.remote"
  containers:
  - name: cat-hosts
    image: busybox
    command:
    - cat
    args:
    - "/etc/hosts"
Kubernetes provides a DNS service that all pods get to use. In turn, you can define an ExternalName service that just defines a DNS record. Once you do that, your pods can talk to that service the same way they'd talk to any other Kubernetes service, and reach whatever server.
You could deploy a set of ExternalName services globally. You could do it in a Helm chart too, if you wanted, with something like:
apiVersion: v1
kind: Service
metadata:
  name: {{ .Release.Name }}-{{ .Chart.Name }}-foo
spec:
  type: ExternalName
  externalName: {{ .Values.fooHostname }}
The practice I've learned is that you should avoid using /etc/hosts if at all possible.