I am following this image architecture from K8s
However, I cannot seem to connect to the socket.io server from within the cluster using the service name.
Current situation:
From POD B
Can connect directly to App A's pod using WS ( ws://10.10.10.1:3000 ) ✅
Can connect to App A's service using HTTP ( http://orders:8000 ) ✅
Cannot connect to App A's service using WS ( ws://orders:8000 ) ❌
From outside world / Internet
Can connect to App A's service using WS ( ws://my-external-ip/orders ) ✅ // using traefik to route my-external-ip/orders to service orders:8000
Can connect to App A's service using HTTP ( http://my-external-ip/orders ) ✅ // using traefik to route my-external-ip/orders to service orders:8000
My current service configuration
spec:
  ports:
    - name: http
      protocol: TCP
      port: 8000
      targetPort: 3000
  selector:
    app: orders
  clusterIP: 172.20.115.234
  type: ClusterIP
  sessionAffinity: None
My Ingress Helm chart
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: {{ template "app.name" $ }}-backend
  annotations:
    kubernetes.io/ingress.class: traefik
    ingress.kubernetes.io/auth-type: forward
    ingress.kubernetes.io/auth-url: "http://auth.default.svc.cluster.local:8000/api/v1/oauth2/auth"
    ingress.kubernetes.io/auth-response-headers: authorization
  labels:
{{- include "api-gw.labels" $ | indent 4 }}
spec:
  rules:
    - host: {{ .Values.deploy.host | quote }}
      http:
        paths:
          - path: /socket/events
            backend:
              serviceName: orders
              servicePort: 8000
My Service Helm chart
apiVersion: v1
kind: Service
metadata:
  name: {{ template "app.name" . }}
spec:
  {{ if not $isDebug -}}
  selector:
    app: {{ template "app.name" . }}
  {{ end -}}
  type: NodePort
  ports:
    - name: http
      port: {{ template "app.svc.port" . }}
      targetPort: {{ template "app.port" . }}
      nodePort: {{ .Values.service.exposedPort }}
      protocol: TCP
# Helpers..
# {{/* vim: set filetype=mustache: */}}
# {{- define "app.name" -}}
# {{ default "default" .Chart.Name }}
# {{- end -}}
# {{- define "app.port" -}}
# 3000
# {{- end -}}
# {{- define "app.svc.port" -}}
# 8000
# {{- end -}}
The service's DNS name must be resolvable from your container in order to reach its VIP address.
Kubernetes automatically sets environment variables in all pods which have the same selector as the service.
In your case, all pods with selector A have environment variables, set when the container is deployed, that contain the service's VIP and PORT.
The other pod, with selector B, is not linked as an endpoint of the service, therefore it does not contain the environment variables needed to access the service.
Here is the k8s documentation related to your problem.
To solve this, you can set up a DNS service, which k8s offers as a cluster add-on.
Just follow the documentation.
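For example, with cluster DNS in place, Pod B can reach the service by its fully qualified DNS name rather than relying on injected variables. A minimal sketch, assuming the service lives in the default namespace and that the client app reads a hypothetical ORDERS_URL variable:

apiVersion: v1
kind: Pod
metadata:
  name: pod-b
spec:
  containers:
    - name: client
      image: my-client:latest            # hypothetical client image
      env:
        - name: ORDERS_URL               # hypothetical variable consumed by the app
          value: "ws://orders.default.svc.cluster.local:8000"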
Related
I am creating 3 Istio ServiceEntry resources using a Helm template. When I use range, it only creates the last one. Here are the values.yaml and the ServiceEntry yaml. How do I create all 3 service entries here?
serviceentry:
  appdb01: APPDB01.domain.com
  appdb02: APPDB02.domain.com
  appdb03: APPDB03.domain.com
{{- range $key, $val := .Values.serviceentry }}
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: {{ $key }}
  namespace: mytest
spec:
  hosts:
    - {{ $val | quote }}
  location: MESH_EXTERNAL
  ports:
    - name: tcp1433
      number: 1433
      protocol: TCP
    - name: udp1434
      number: 1434
      protocol: UDP
  resolution: DNS
{{- end }}
Result:
Only appdb03 is created.
When rendering the Helm template, it only creates appdb03 but not the other two.
You need to make sure you include a YAML start-of-document marker, --- on its own line, inside the range loop. This is true whenever you're producing multiple Kubernetes manifests from the same Helm template file; it's not specific to this Istio use case.
{{- range $key, $val := .Values.serviceentry }}
---
apiVersion: ...
{{- end }}
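Applied to the ServiceEntry template above, nothing else needs to change; each iteration now renders its own YAML document, so all three entries (appdb01, appdb02, appdb03) are created:

{{- range $key, $val := .Values.serviceentry }}
---
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: {{ $key }}
  namespace: mytest
spec:
  hosts:
    - {{ $val | quote }}
  location: MESH_EXTERNAL
  ports:
    - name: tcp1433
      number: 1433
      protocol: TCP
    - name: udp1434
      number: 1434
      protocol: UDP
  resolution: DNS
{{- end }}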
I'm reading the Helm documentation about how to do loops for Kubernetes; basically, what I want to do is something like this.
What I have...
values.yaml
dnsAliases:
- test1
- test2
- test3
on services-external.yaml
{{- if and .Values.var1.var1parent (eq .Values.var2.var2parent "value") }}
{{- range .Values.dnsAliases }}
apiVersion: v1
kind: Service
metadata:
  name: name-{{ . }}  # for creating the names "name-test1", "name-test2", and so on
spec:
  type: ExternalName
  externalName: {{ .Values.var3.var3parent }}-{{ .Values.var4.var4parent }}-{{ . }}.svc.cluster.local
  ports:
    - port: 80
      protocol: TCP
      targetPort: 80
{{ end }}
{{ end }}
but I'm getting the error
Error: UPGRADE FAILED: render error in "services-external.yaml": template: templates/services-external.yaml:312:32: executing "services-external.yaml" at <.Values.var3.var3parent>: can't evaluate field Values in type interface {}
I also tried with "with" but got the same error. Is there some way to achieve this by using the "if" with a loop in Helm?
The error you have shows that the template can't find values for <.Values.var3.var3parent>. Since you're using a range block, . refers to the local scope within the loop, while you need to refer to the global scope. This can be achieved with two approaches:
Use $ before the variable you need to access (shown with var3)
Define a new variable and save the values you need to this variable (shown with var4)
Here's a tested template using both approaches from above:
{{- if and .Values.var1.var1parent (eq .Values.var2.var2parent "value") }}
{{- $var4 := .Values.var4 -}}
{{- range .Values.dnsAliases }}
apiVersion: v1
kind: Service
metadata:
  name: name-{{ . }}  # for creating the names "name-test1", "name-test2", and so on
spec:
  type: ExternalName
  externalName: {{ $.Values.var3.var3parent }}-{{ $var4.var4parent }}-{{ . }}.svc.cluster.local
  ports:
    - port: 80
      protocol: TCP
      targetPort: 80
{{ end }}
{{ end }}
You can read more about it here.
There is also one more possible solution: reset the scope to root inside the loop and work with it as usual (a more sketchy approach; here's a link).
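A minimal sketch of that scope-reset idea, assuming the same values as above; inside the with $ block, . refers to the root scope again, so the loop item has to be saved to a variable first:

{{- range $alias := .Values.dnsAliases }}
{{- with $ }}
---
apiVersion: v1
kind: Service
metadata:
  name: name-{{ $alias }}
spec:
  type: ExternalName
  externalName: {{ .Values.var3.var3parent }}-{{ .Values.var4.var4parent }}-{{ $alias }}.svc.cluster.local
{{- end }}
{{- end }}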
Thanks #moonkotte, I managed to make it work using the approach of defining a new variable to save the scope; here's the example.
On values.yaml
dnsShortNames:
  short1: "short1"
  short2: "short2"
  short3: "short3"

dnsAliases:
  - test1
  - test2
  - test3
on services-external.yaml
{{- $dns_short_names := .Values.dnsShortNames }}
{{- range .Values.dnsAliases }}
---
apiVersion: v1
kind: Service
metadata:
  name: name-{{ . }}
spec:
  type: ExternalName
  externalName: {{ $dns_short_names.short1 }}-{{ $dns_short_names.short2 }}-{{ $dns_short_names.short3 }}.{{ . }}.svc.cluster.local
  ports:
    - port: 80
      protocol: TCP
      targetPort: 80
{{- end }}
Applying this, Kubernetes will create 3 different external services:
short1-short2-short3.test1.svc.cluster.local
short1-short2-short3.test2.svc.cluster.local
short1-short2-short3.test3.svc.cluster.local
Public thanks to my friend Xavi <3.
I'm trying to set up a connection to a kubernetes cluster and I'm getting a 502 bad gateway error.
The cluster has an nginx ingress and a service (listening at both http and https). In addition the ingress is behind an nginx ingress service (I have nginx helm chart installed) with a static IP address.
I can see in the description of the cluster ingress that it knows the service's endpoints.
I see that the pods communicate successfully with each other (there are 3 pods), but I can't ping the external nginx from within a shell.
These are the cluster's ingress values in values.yaml:
ingress:
  # If `true`, an Ingress is created
  enabled: true
  # The Service port targeted by the Ingress
  servicePort: http
  # Ingress annotations
  annotations:
    kubernetes.io/ingress.class: "nginx"
  # Additional Ingress labels
  labels: {}
  # List of rules for the Ingress
  rules:
    -
      # Ingress host
      host: my-app.com
      # Paths for the host
      paths:
        - /
  # TLS configuration
  tls:
    - hosts:
        - my-app.com
      secretName: my-app-tls
When I go to my-app.com I see in the browser that I'm on a secure connection (the lock icon next to the URL), but, as I said, I get a 502 bad gateway error. If I change servicePort from http to https, I get a '400 bad request' error instead.
How should I set up both ingresses to allow a secured connection to my app?
I tried all sorts of annotations, but always got the errors above.
nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
nginx.ingress.kubernetes.io/ssl-redirect: "false"
Thank you!
What you have shared isn't an Ingress definition itself, only its values file.
The Ingress definition should look something like the one below.
{{- if .Values.ingress.enabled -}}
{{- $fullName := include "app.fullname" . -}}
{{- $svcPort := .Values.service.port -}}
{{- if semverCompare ">=1.14-0" .Capabilities.KubeVersion.GitVersion -}}
apiVersion: networking.k8s.io/v1beta1
{{- else -}}
apiVersion: extensions/v1beta1
{{- end }}
kind: Ingress
metadata:
  name: {{ $fullName }}
  labels:
    {{- include "app.labels" . | nindent 4 }}
  {{- with .Values.ingress.annotations }}
  annotations:
    {{- toYaml . | nindent 4 }}
  {{- end }}
spec:
  {{- if .Values.ingress.tls }}
  tls:
    {{- range .Values.ingress.tls }}
    - hosts:
        {{- range .hosts }}
        - {{ . | quote }}
        {{- end }}
      secretName: {{ .secretName }}
    {{- end }}
  {{- end }}
  rules:
    {{- range .Values.ingress.hosts }}
    - host: {{ .host | quote }}
      http:
        paths:
          {{- range .paths }}
          - path: /
            backend:
              serviceName: {{ $fullName }}
              servicePort: {{ $svcPort }}
          {{- end }}
    {{- end }}
{{- end }}
This gets rendered only if your values have ingress.enabled=true:
service:
  type: ClusterIP
  port: 8080

ingress:
  enabled: true
The missing annotation was nginx.org/ssl-services, which accepts the list of secure services.
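A minimal sketch of where that annotation could go in the values shown earlier; the service name my-app here is an assumption, and the annotation takes a comma-separated list of services whose backends should be reached over TLS:

ingress:
  enabled: true
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.org/ssl-services: "my-app"   # assumed service name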
I want to deploy a gRPC service to Azure Kubernetes Service. I have already deployed RESTful services using Helm charts, but the gRPC service is throwing a "connection timed out" error.
I have already tried everything said in the NGINX and Helm documentation but nothing worked. The certificate is self-signed. I have tried every permutation and combination of annotations :p
Service.yaml
apiVersion: v1
kind: Service
metadata:
  name: {{ template "fullname" . }}
  labels:
    app: {{ template "fullname" . }}
    chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
    release: "{{ .Release.Name }}"
    heritage: "{{ .Release.Service }}"
spec:
  ports:
    - port: 50051
      protocol: TCP
      targetPort: 50051
      name: grpc
  selector:
    app: {{ template "fullname" . }}
  type: NodePort
ingress.yaml
{{ if .Values.ingress.enabled }}
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: {{ template "fullname" . }}
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/grpc-backend: "true"
    nginx.org/grpc-services: {{ template "fullname" . }}
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  tls:
    secretName: aks-ingress-tls
  rules:
    - http:
        proto: h2
        paths:
          - backend:
              serviceName: {{ template "fullname" . }}
              servicePort: grpc
              proto: h2
            path: /{servicename}-grpc(/|$)(.*)
{{ end }}
I tried this also, still not working:
{{ if .Values.ingress.enabled }}
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: {{ template "fullname" . }}
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/backend-protocol: "GRPC"
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  tls:
    - secretName: aks-ingress-tls
  rules:
    - http:
        paths:
          - backend:
              serviceName: {{ template "fullname" . }}
              servicePort: 50051
            path: /servicename-grpc(/|$)(.*)
{{ end }}
It looks like you are missing an annotation on your ingress.
ingress.yaml - snippet
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    # This annotation matters!
    nginx.ingress.kubernetes.io/backend-protocol: "GRPC"
According to this snippet from the official Kubernetes nginx ingress documentation:
Backend Protocol
Using backend-protocol annotations is possible to indicate how NGINX should communicate with the backend service. (Replaces secure-backends in older versions) Valid Values: HTTP, HTTPS, GRPC, GRPCS and AJP
By default NGINX uses HTTP.
Example:
nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
As an aside, there's a chance you might need to specify GRPCS instead of GRPC since it appears you are using SSL.
Another thing to call out is that the docs mention that this annotation replaces 'secure-backends' in older versions, which could be where you found the grpc-backend annotation you are currently using.
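Putting that together with the second ingress already shown in the question, a sketch of the relevant part might look like the following; choosing GRPCS over GRPC (because the backend appears to use TLS) and dropping the rewrite rule are both assumptions here:

{{ if .Values.ingress.enabled }}
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: {{ template "fullname" . }}
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    # GRPC for a plaintext HTTP/2 backend, GRPCS if the pod terminates TLS itself
    nginx.ingress.kubernetes.io/backend-protocol: "GRPCS"
spec:
  tls:
    - secretName: aks-ingress-tls
  rules:
    - http:
        paths:
          - path: /
            backend:
              serviceName: {{ template "fullname" . }}
              servicePort: 50051
{{ end }}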
I am trying to build a kubernetes environment from scratch using Google's Deployment Manager and Kubernetes Engine. So far, the cluster is configured to host two apps. Each app is served by an exclusive service, which in turn receives traffic from an exclusive ingress. Both ingresses are created with the same Deployment Manager jinja template:
- name: {{ NAME_PREFIX }}-ingress
  type: {{ CLUSTER_TYPE_BETA }}:{{ INGRESS_COLLECTION }}
  metadata:
    dependsOn:
      - {{ properties['cluster-type-v1beta1-extensions'] }}
  properties:
    apiVersion: extensions/v1beta1
    kind: Ingress
    namespace: {{ properties['namespace'] | default('default') }}
    metadata:
      name: {{ NAME_PREFIX }}
      labels:
        app: {{ env['name'] }}
        deployment: {{ env['deployment'] }}
    spec:
      rules:
        - host: {{ properties['host'] }}
          http:
            paths:
              - backend:
                  serviceName: {{ NAME_PREFIX }}-svc
                  servicePort: {{ properties['node-port'] }}
The environment deployment works fine. However, I was hoping that both ingresses would be bound to the same external address, which is not happening. How could I set up the template so that this restriction is enforced? More generally, is it considered a kubernetes bad practice to spawn one ingress for each of the environment's host-based rules?
Each ingress will create its own HTTP(S) load balancer. If you want a single IP, define a single ingress with multiple host paths, one for each service.
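A rough sketch of the kind of manifest a merged template could render; the hosts, service names, and port below are invented for illustration and would come from your Deployment Manager properties:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: shared-ingress                 # hypothetical name
spec:
  rules:
    - host: app-a.example.com          # assumed host for the first app
      http:
        paths:
          - backend:
              serviceName: app-a-svc   # hypothetical service name
              servicePort: 80
    - host: app-b.example.com          # assumed host for the second app
      http:
        paths:
          - backend:
              serviceName: app-b-svc   # hypothetical service name
              servicePort: 80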