I am trying to create k8s Services of type LoadBalancer using a range loop in Helm. I need each Service to point to a dedicated pod. I have 3 pods deployed, up and running, and I am trying to create 3 Services pointing to the 3 different pods.
{{- $replicas := .Values.replicaCount | int -}}
{{- $namespace := .Release.Namespace }}
{{- range $i, $e := until $replicas }}
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: abc-svc
    statefulset.kubernetes.io/pod-name: abc-{{ $i }}
  name: service-{{ $i }}
  namespace: {{ $namespace }}
spec:
  ports:
  - protocol: TCP
    targetPort: 2000
    port: {{ . | printf ".Value.ports.port_%d" | int }}
  selector:
    app: abc-svc
    statefulset.kubernetes.io/pod-name: abc-{{ $i }}
  type: LoadBalancer
{{- end }}
My values.yaml:
ports:
  port_1: 30001
  port_2: 30002
  port_3: 30003
replicaCount: 3
A dry-run gives the output below:
# Source: t1/templates/svc.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: abc-svc
    statefulset.kubernetes.io/pod-name: abc-0
  name: service-0
  namespace: xyz
spec:
  ports:
  - protocol: TCP
    targetPort: 2000
    port: 0
  selector:
    app: abc-svc
    statefulset.kubernetes.io/pod-name: abc-0
  type: LoadBalancer
---
# Source: t1/templates/svc.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: abc-svc
    statefulset.kubernetes.io/pod-name: abc-1
  name: service-1
  namespace: xyz
spec:
  ports:
  - protocol: TCP
    targetPort: 2000
    port: 0
  selector:
    app: abc-svc
    statefulset.kubernetes.io/pod-name: abc-1
  type: LoadBalancer
---
# Source: t1/templates/svc.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: abc-svc
    statefulset.kubernetes.io/pod-name: abc-2
  name: service-2
  namespace: xyz
spec:
  ports:
  - protocol: TCP
    targetPort: 2000
    port: 0
  selector:
    app: abc-svc
    statefulset.kubernetes.io/pod-name: abc-2
  type: LoadBalancer
I need the port numbers to point to the correct ports according to the values.yaml file. For service-0, service-1, and service-2 I need ports 30001, 30002, and 30003 assigned. Please suggest. Thank you!
To do the dynamic lookup in the port list, you need to use the index function. This is part of the standard Go text/template language and not a Helm extension:
port: {{ index .Values.ports (add $i 1 | printf "port_%d") }}
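In context, the ports block inside the range loop would then read:
  ports:
  - protocol: TCP
    targetPort: 2000
    port: {{ index .Values.ports (add $i 1 | printf "port_%d") }}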
This could be slightly simpler if you change the ports value to be a list and not a map:
# values.yaml
ports:
  - 30001
  - 30002
  - 30003
replicaCount: 3
and then in the template:
port: {{ index .Values.ports $i }}
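With the list form, index .Values.ports $i returns the $i-th entry directly, so service-0 gets 30001, service-1 gets 30002, and service-2 gets 30003.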
If you didn't require access to specific pods from outside the cluster, a StatefulSet creates a cluster-internal DNS name for each pod on its own, and you could avoid this loop entirely.
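For reference, a minimal sketch of that per-pod DNS setup, assuming the StatefulSet's serviceName is set to abc-svc (the StatefulSet itself is not shown in the question):
apiVersion: v1
kind: Service
metadata:
  name: abc-svc
  namespace: xyz
spec:
  clusterIP: None   # headless Service; required for per-pod DNS records
  selector:
    app: abc-svc
  ports:
  - port: 2000
    targetPort: 2000
Each pod would then be reachable inside the cluster as abc-0.abc-svc.xyz.svc.cluster.local, abc-1.abc-svc.xyz.svc.cluster.local, and so on, with no per-pod Service objects at all.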
I want to use one deployment file and one values file to create charts for multiple services.
My values file has the values of all the services that have to use the one deployment file.
Below is my deployment file content:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Values.PA.name }}-deployment
  labels:
    app: {{ .Values.PA.name }}
spec:
  replicas: {{ .Values.PA.replicas }}
  selector:
    matchLabels:
      app: {{ .Values.PA.name }}
  template:
    metadata:
      labels:
        app: {{ .Values.PA.name }}
    spec:
      containers:
      - name: {{ .Values.PA.name }}
        image: {{ .Values.PA.image }}:{{ .Values.PA.tag }}
        ports:
        - containerPort: {{ .Values.PA.port }}
Below is my values file:
PA:
  name: povisioning_adapter
  replicas: 1
  env: dev
  image: provisioning_adapter
  tag: master
  port: 8001
  service:
    protocol: TCP
    port: 8001
    targetPort: 8001
    nodePort: 30100
SA:
  name: service_adapter
  replicas: 1
  env: dev
  image: service_adapter
  tag: master
  port: 8002
  service:
    protocol: TCP
    port: 8002
    targetPort: 8002
    nodePort: 30200
Now I want to iterate through the PA, SA, etc. values inside my deployment file.
How do I declare a list [PA, SA, ...] and loop through it inside the deployment file?
You can wrap this in a range loop:
{{- range list .Values.PA .Values.SA -}}
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .name }}-deployment
...
{{ end -}}
If you need to refer to the top-level .Values in this setup, you have to "escape" the range loop's scoping by explicitly referring to the top-level value $. You might also need to pass $ as the parameter when you include a helper template:
metadata:
  labels:
    name: {{ .name }}
{{ include "myapp.labels" $ | indent 4 }}
{{/*                      ^ */}}
You could do something similar by breaking this out into a helper template that produces one of the Kubernetes objects. You may also be able to restructure this to use the name of the component rather than its specific settings: where you currently have .Values.PA.name, if you have the top-level $.Values object and you know the component name, then index $.Values "PA" "name" is equivalent, and any of those parts can be replaced by variables.
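As a rough sketch of that helper-template approach (the template name myapp.deployment, the list-based parameter passing, and the loop over component keys are assumptions here, not something taken from the original chart):
{{/* templates/_helpers.tpl */}}
{{- define "myapp.deployment" -}}
{{- $root := index . 0 -}}
{{- $key := index . 1 -}}
{{- $component := index $root.Values $key -}}
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ $component.name }}-deployment
  labels:
    app: {{ $component.name }}
spec:
  replicas: {{ $component.replicas }}
  selector:
    matchLabels:
      app: {{ $component.name }}
  template:
    metadata:
      labels:
        app: {{ $component.name }}
    spec:
      containers:
      - name: {{ $component.name }}
        image: {{ $component.image }}:{{ $component.tag }}
        ports:
        - containerPort: {{ $component.port }}
{{- end }}
The deployment template itself then reduces to:
{{- range $key := list "PA" "SA" }}
---
{{ include "myapp.deployment" (list $ $key) }}
{{- end }}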
I have an application with Pods that are not part of a Deployment, and I use NodePort Services. I access my application through ipv4:nodePort/url-microservice. When I want to scale my Pods, do I need a Deployment with replicas?
I tried using a Deployment with NodePorts but it doesn't work this way anymore: ipv4:nodePort/url-microservice
I'll post my Deployments and Services so someone can see if I'm wrong somewhere.
Deployments:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-gateway
  labels:
    app: my-gateway
spec:
  replicas: 1
  selector:
    matchLabels:
      run: my-gateway
  template:
    metadata:
      labels:
        run: my-gateway
    spec:
      containers:
      - name: my-gateway
        image: rafaelribeirosouza86/shopping:api-gateway
        imagePullPolicy: Always
        ports:
        - containerPort: 31534
          protocol: TCP
      imagePullSecrets:
      - name: regcred
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-adm-contact
  labels:
    app: my-adm-contact
spec:
  replicas: 1
  selector:
    matchLabels:
      run: my-adm-contact
  template:
    metadata:
      labels:
        run: my-adm-contact
    spec:
      containers:
      - name: my-adm-contact
        image: rafaelribeirosouza86/shopping:my-adm-contact
        imagePullPolicy: Always
        ports:
        - containerPort: 30001
          protocol: TCP
      imagePullSecrets:
      - name: regcred
Services:
apiVersion: v1
kind: Service
metadata:
  name: my-adm-contact-service
  namespace: default
spec:
  # clusterIP: 10.99.233.224
  ports:
  - port: 30001
    protocol: TCP
    targetPort: 30001
    nodePort: 30001
  # externalTrafficPolicy: Local
  selector:
    app: my-adm-contact
  # type: ClusterIP
  # type: LoadBalancer
  type: NodePort
---
apiVersion: v1
kind: Service
metadata:
  name: my-gateway-service
  namespace: default
spec:
  # clusterIP: 10.99.233.224
  # protocol: ##The default is TCP
  # port: ##Exposes the service within the cluster. Also, other Pods use this to access the Service
  # targetPort: ##The service sends request while containers accept traffic on this port.
  ports:
  - port: 31534
    protocol: TCP
    targetPort: 31534
    nodePort: 31534
  # externalTrafficPolicy: Local
  selector:
    app: my-gateway
  # type: ClusterIP
  # type: LoadBalancer
  type: NodePort
Try:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-gateway
  ...
spec:
  ...
  template:
    metadata:
      labels:
        run: my-gateway # <-- take note
    ...
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-adm-contact
  ...
spec:
  ...
  template:
    metadata:
      labels:
        run: my-adm-contact # <-- take note
    ...
---
apiVersion: v1
kind: Service
metadata:
  name: my-adm-contact-service
  ...
spec:
  selector:
    run: my-adm-contact # <-- wrong selector, changed from 'app' to 'run'
---
apiVersion: v1
kind: Service
metadata:
  name: my-gateway-service
  ...
spec:
  selector:
    run: my-gateway # <-- wrong selector, changed from 'app' to 'run'
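Once the Service selectors match the Pod labels, you can confirm that each Service actually selects its Pods by running kubectl get endpoints my-gateway-service my-adm-contact-service; if ENDPOINTS shows <none>, the selector still does not match the Pod labels.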
I have a local Minishift cluster and I configured a simple web app with a Service for it.
The Service seems to be connected to the Pod and sends traffic, but when I try to create a Route to expose the app, it fails with the error above. I tried many different solutions but nothing seems to work.
Deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-openshift
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-openshift
  template:
    metadata:
      labels:
        app: hello-openshift
    spec:
      containers:
      - name: hello-openshift
        image: openshift/hello-openshift:latest
        ports:
        - containerPort: 8080
Here is the service.yaml:
apiVersion: v1
kind: Service
metadata:
  name: automationportal-service
  labels:
    {{- include "automation-portal.labels" . | nindent 4 }}
spec:
  type: clusterIP
  ports:
    - port: 80
      targetPort: 8080
      protocol: TCP
      name: http
  selector:
    app: hello-openshift
route.yaml:
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: automationportal-route
  labels:
  annotations:
spec:
  host:
  port:
    targetPort: http
  to:
    kind: Service
    name: automationportal-service
Thanos requires a targetPort defined as a string in the Service for the ServiceMonitor to communicate with it.
However, just defining the targetPort as a string causes problems. I believe that something more is needed in the Deployment. I think the targetPort 'web' must be defined in the Deployment.
Can anyone assist with how the Deployment should look?
apiVersion: v1
kind: Service
metadata:
  name: my-service
  labels:
    app: my-app
    tenant: a
    servicemonitor: my-servicemonitor
    monitor: "true"
spec:
  type: ClusterIP
  ports:
  - name: web
    port: 80
    protocol: TCP
    targetPort: web
  selector:
    app: my-app
---
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  labels:
    app: my-app
    tenant: a
  name: my-monitor
spec:
  endpoints:
  - port: web
    path: /metrics
  namespaceSelector:
    matchNames:
    - my-namespace
  selector:
    matchLabels:
      servicemonitor: my-servicemonitor
      monitor: "true"
A simple way to define targetPort as a string is to first define a named port in the Deployment, and then refer to that name in the Service's targetPort. Below is a simple example showing how the port name "http" from a Deployment is mapped in a Service's targetPort spec.
Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend
spec:
  selector:
    matchLabels:
      app: hello
      tier: backend
      track: stable
  replicas: 3
  template:
    metadata:
      labels:
        app: hello
        tier: backend
        track: stable
    spec:
      containers:
      - name: hello
        image: "gcr.io/google-samples/hello-go-gke:1.0"
        ports:
        - name: http
          containerPort: 80
Service:
apiVersion: v1
kind: Service
metadata:
  name: hello
spec:
  selector:
    app: hello
    tier: backend
  ports:
  - protocol: TCP
    port: 80
    targetPort: http
You can set targetPort to an integer value or a name.
If you refer to it by name, that name has to be defined within the pod(s) in spec > containers[n] > ports[n] > name
If you refer by integer, there is no need to define ports in pods at all, although it's reasonable to still do it for clarity.
By providing a matching port name in the Deployment, things appear to work properly:
ports:
- name: web
  containerPort: 80
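Putting that together with the Service and ServiceMonitor above, the Deployment could look something like this (a minimal sketch; the Deployment name, image, replica count, and container port 80 are placeholders, not taken from the question):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  labels:
    app: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app          # matches the Service's selector
    spec:
      containers:
      - name: my-app
        image: my-app:latest   # placeholder image
        ports:
        - name: web            # referenced by targetPort: web in the Service and port: web in the ServiceMonitor
          containerPort: 80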
My microservice has multiple containers, each of which needs access to a different port. How do I expose this service on multiple ports using the Hasura CLI and project configuration files?
Edit: Adding the microservice's k8s.yaml (as requested by @iamnat)
Let's say I have two containers, containerA and containerB, that I want to expose over HTTP on ports 6379 and 8000 respectively.
apiVersion: v1
items:
- apiVersion: extensions/v1beta1
  kind: Deployment
  metadata:
    creationTimestamp: null
    labels:
      app: www
      hasuraService: custom
    name: www
    namespace: '{{ cluster.metadata.namespaces.user }}'
  spec:
    replicas: 1
    strategy: {}
    template:
      metadata:
        creationTimestamp: null
        labels:
          app: www
      spec:
        containers:
        - name: containerA
          image: imageA
          ports:
          - containerPort: 6379
        - name: containerB
          image: imageB
          ports:
          - containerPort: 8000
        securityContext: {}
        terminationGracePeriodSeconds: 0
  status: {}
- apiVersion: v1
  kind: Service
  metadata:
    creationTimestamp: null
    labels:
      app: www
      hasuraService: custom
    name: www
    namespace: '{{ cluster.metadata.namespaces.user }}'
  spec:
    ports:
    - port: 6379
      name: containerA
      protocol: HTTP
      targetPort: 6379
    - port: 8000
      name: containerB
      protocol: HTTP
      targetPort: 8000
    selector:
      app: www
    type: ClusterIP
  status:
    loadBalancer: {}
kind: List
metadata: {}
TL;DR:
- Add an API gateway route for each HTTP endpoint you want to expose [docs]
Inside the Kubernetes cluster, given your k8s spec, this is what your setup will look like:
http://www.default:6379 -> containerA
http://www.default:8000 -> containerB
So you need to create a route for each of those HTTP paths in conf/routes.yaml.
www-a:
  /:
    upstreamService:
      name: www
      namespace: {{ cluster.metadata.namespaces.user }}
    upstreamServicePath: /
    upstreamServicePort: 8000
    corsPolicy: allow_all
www-b:
  /:
    upstreamService:
      name: www
      namespace: {{ cluster.metadata.namespaces.user }}
    upstreamServicePath: /
    upstreamServicePort: 6379
    corsPolicy: allow_all
This means that you'll get the following:
https://www-a.domain.com -> containerB (upstream port 8000)
https://www-b.domain.com -> containerA (upstream port 6379)