Deploy to Kubernetes

I want to deploy frontend and backend applications on Kubernetes. I wrote these YAML files (generated with helm template):
# Source: quality-control/templates/ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: RELEASE-NAME-quality-control
labels:
app.kubernetes.io/name: quality-control
helm.sh/chart: quality-control-0.1.0
app.kubernetes.io/instance: RELEASE-NAME
app.kubernetes.io/managed-by: Tiller
spec:
rules:
- host: "quality-control.ru"
http:
paths:
- path: /
backend:
serviceName: RELEASE-NAME-quality-control
servicePort: http
---
# Source: quality-control/templates/deployment.yaml
apiVersion: apps/v1beta2
kind: List
items:
- apiVersion: apps/v1beta2
kind: Deployment
metadata:
name: quality-control-frontend
labels:
app.kubernetes.io/name: quality-control-frontend
helm.sh/chart: quality-control-0.1.0
app.kubernetes.io/instance: RELEASE-NAME
app.kubernetes.io/managed-by: Tiller
spec:
replicas: 1
selector:
matchLabels:
app.kubernetes.io/name: quality-control-frontend
app.kubernetes.io/instance: RELEASE-NAME
template:
metadata:
labels:
app.kubernetes.io/name: quality-control-frontend
app.kubernetes.io/instance: RELEASE-NAME
logger: external
sourcetype: quality-control-frontend
spec:
containers:
- name: quality-control
image: "registry.***.ru:5050/quality-control-frontend:stable"
imagePullPolicy: Always
env:
- name: spring_profiles_active
value: dev
ports:
- containerPort: 80
protocol: TCP
livenessProbe:
httpGet:
path: /healthcheck
port: 80
protocol: TCP
initialDelaySeconds: 10
periodSeconds: 10
resources:
limits:
cpu: 2
memory: 2048Mi
requests:
cpu: 1
memory: 1024Mi
- apiVersion: apps/v1beta2
kind: Deployment
metadata:
name: quality-control-backend
labels:
app.kubernetes.io/name: quality-control-backend
helm.sh/chart: quality-control-0.1.0
app.kubernetes.io/instance: RELEASE-NAME
app.kubernetes.io/managed-by: Tiller
spec:
replicas: 1
selector:
matchLabels:
app.kubernetes.io/name: quality-control-backend
app.kubernetes.io/instance: RELEASE-NAME
template:
metadata:
labels:
app.kubernetes.io/name: quality-control-backend
app.kubernetes.io/instance: RELEASE-NAME
logger: external
sourcetype: quality-control-backend
spec:
containers:
- name: quality-control
image: "registry.***.ru:5050/quality-control-backend:stable"
imagePullPolicy: Always
env:
- name: spring_profiles_active
value: dev
ports:
- containerPort: 80
protocol: TCP
resources:
limits:
cpu: 2
memory: 2048Mi
requests:
cpu: 1
memory: 1024Mi
---
# Source: quality-control/templates/service.yaml
apiVersion: v1
kind: List
items:
- apiVersion: v1
kind: Service
metadata:
name: quality-control-frontend
labels:
app.kubernetes.io/name: quality-control-frontend
helm.sh/chart: quality-control-0.1.0
app.kubernetes.io/instance: RELEASE-NAME
app.kubernetes.io/managed-by: Tiller
spec:
type: ClusterIP
ports:
- port: 80
targetPort: 80
protocol: TCP
selector:
app.kubernetes.io/name: quality-control-frontend
app.kubernetes.io/instance: RELEASE-NAME
- apiVersion: v1
kind: Service
metadata:
name: quality-control-backend
labels:
app.kubernetes.io/name: quality-control-backend
helm.sh/chart: quality-control-0.1.0
app.kubernetes.io/instance: RELEASE-NAME}
app.kubernetes.io/managed-by: Tiller
spec:
type: ClusterIP
ports:
- port: 8080
targetPort: 8080
protocol: TCP
selector:
app.kubernetes.io/name: quality-control-backend
app.kubernetes.io/instance: RELEASE-NAME
But I get an error when deploying:
Error: release quality-control failed: Deployment.apps "quality-control-frontend" is invalid: [spec.selector: Required value, spec.template.metadata.labels: Invalid value: map[string]string{"app.kubernetes.io/instance":"quality-control", "app.kubernetes.io/name":"quality-control-frontend"}: `selector` does not match template `labels`]

There is an indentation issue in the first Deployment object: matchLabels must be nested under selector (and the label keys under matchLabels), otherwise spec.selector stays empty and does not match the template labels. Change it from:
spec:
replicas: 1
selector:
matchLabels:
app.kubernetes.io/name: quality-control-frontend
app.kubernetes.io/instance: RELEASE-NAME
to
spec:
replicas: 1
selector:
matchLabels:
app.kubernetes.io/name: quality-control-frontend
app.kubernetes.io/instance: RELEASE-NAME
There is also an indentation problem in the Service list (note the stray } after RELEASE-NAME). Change it from:
- apiVersion: v1
kind: Service
metadata:
name: quality-control-backend
labels:
app.kubernetes.io/name: quality-control-backend
helm.sh/chart: quality-control-0.1.0
app.kubernetes.io/instance: RELEASE-NAME}
app.kubernetes.io/managed-by: Tiller
to
- apiVersion: v1
kind: Service
metadata:
name: quality-control-backend
labels:
app.kubernetes.io/name: quality-control-backend
helm.sh/chart: quality-control-0.1.0
app.kubernetes.io/instance: RELEASE-NAME
app.kubernetes.io/managed-by: Tiller
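As a quick sanity check (a hedged suggestion, not part of the original fix): after correcting the indentation you can re-render the chart and inspect or dry-run the result before installing it again; the chart path ./quality-control is an assumption.
helm template ./quality-control                                  # confirm selector/matchLabels are nested correctly
helm template ./quality-control | kubectl apply --dry-run -f -   # basic client-side validation of the rendered objects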

Related

Kubernetes Ingress failing to get address

I am trying to set up an ingress for services using nginx ingress on AWS EKS.
I have installed nginx ingress with the code provided on Kubernetes' github page.
apiVersion: v1
kind: Namespace
metadata:
labels:
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
name: ingress-nginx
---
apiVersion: v1
automountServiceAccountToken: true
kind: ServiceAccount
metadata:
labels:
app.kubernetes.io/component: controller
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/version: 1.2.0
name: ingress-nginx
namespace: ingress-nginx
---
apiVersion: v1
kind: ServiceAccount
metadata:
labels:
app.kubernetes.io/component: admission-webhook
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/version: 1.2.0
name: ingress-nginx-admission
namespace: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
labels:
app.kubernetes.io/component: controller
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/version: 1.2.0
name: ingress-nginx
namespace: ingress-nginx
rules:
- apiGroups:
- ""
resources:
- namespaces
verbs:
- get
- apiGroups:
- ""
resources:
- configmaps
- pods
- secrets
- endpoints
verbs:
- get
- list
- watch
- apiGroups:
- ""
resources:
- services
verbs:
- get
- list
- watch
- apiGroups:
- networking.k8s.io
resources:
- ingresses
verbs:
- get
- list
- watch
- apiGroups:
- networking.k8s.io
resources:
- ingresses/status
verbs:
- update
- apiGroups:
- networking.k8s.io
resources:
- ingressclasses
verbs:
- get
- list
- watch
- apiGroups:
- ""
resourceNames:
- ingress-controller-leader
resources:
- configmaps
verbs:
- get
- update
- apiGroups:
- ""
resources:
- configmaps
verbs:
- create
- apiGroups:
- ""
resources:
- events
verbs:
- create
- patch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
labels:
app.kubernetes.io/component: admission-webhook
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/version: 1.2.0
name: ingress-nginx-admission
namespace: ingress-nginx
rules:
- apiGroups:
- ""
resources:
- secrets
verbs:
- get
- create
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
labels:
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/version: 1.2.0
name: ingress-nginx
rules:
- apiGroups:
- ""
resources:
- configmaps
- endpoints
- nodes
- pods
- secrets
- namespaces
verbs:
- list
- watch
- apiGroups:
- ""
resources:
- nodes
verbs:
- get
- apiGroups:
- ""
resources:
- services
verbs:
- get
- list
- watch
- apiGroups:
- networking.k8s.io
resources:
- ingresses
verbs:
- get
- list
- watch
- apiGroups:
- ""
resources:
- events
verbs:
- create
- patch
- apiGroups:
- networking.k8s.io
resources:
- ingresses/status
verbs:
- update
- apiGroups:
- networking.k8s.io
resources:
- ingressclasses
verbs:
- get
- list
- watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
labels:
app.kubernetes.io/component: admission-webhook
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/version: 1.2.0
name: ingress-nginx-admission
rules:
- apiGroups:
- admissionregistration.k8s.io
resources:
- validatingwebhookconfigurations
verbs:
- get
- update
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
labels:
app.kubernetes.io/component: controller
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/version: 1.2.0
name: ingress-nginx
namespace: ingress-nginx
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: ingress-nginx
subjects:
- kind: ServiceAccount
name: ingress-nginx
namespace: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
labels:
app.kubernetes.io/component: admission-webhook
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/version: 1.2.0
name: ingress-nginx-admission
namespace: ingress-nginx
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: ingress-nginx-admission
subjects:
- kind: ServiceAccount
name: ingress-nginx-admission
namespace: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
labels:
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/version: 1.2.0
name: ingress-nginx
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: ingress-nginx
subjects:
- kind: ServiceAccount
name: ingress-nginx
namespace: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
labels:
app.kubernetes.io/component: admission-webhook
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/version: 1.2.0
name: ingress-nginx-admission
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: ingress-nginx-admission
subjects:
- kind: ServiceAccount
name: ingress-nginx-admission
namespace: ingress-nginx
---
apiVersion: v1
data:
allow-snippet-annotations: "true"
kind: ConfigMap
metadata:
labels:
app.kubernetes.io/component: controller
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/version: 1.2.0
name: ingress-nginx-controller
namespace: ingress-nginx
---
apiVersion: v1
kind: Service
metadata:
annotations:
service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp
service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true"
service.beta.kubernetes.io/aws-load-balancer-type: nlb
labels:
app.kubernetes.io/component: controller
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/version: 1.2.0
name: ingress-nginx-controller
namespace: ingress-nginx
spec:
externalTrafficPolicy: Local
ipFamilies:
- IPv4
ipFamilyPolicy: SingleStack
ports:
- appProtocol: http
name: http
port: 80
protocol: TCP
targetPort: http
- appProtocol: https
name: https
port: 443
protocol: TCP
targetPort: https
selector:
app.kubernetes.io/component: controller
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
type: LoadBalancer
---
apiVersion: v1
kind: Service
metadata:
labels:
app.kubernetes.io/component: controller
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/version: 1.2.0
name: ingress-nginx-controller-admission
namespace: ingress-nginx
spec:
ports:
- appProtocol: https
name: https-webhook
port: 443
targetPort: webhook
selector:
app.kubernetes.io/component: controller
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app.kubernetes.io/component: controller
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/version: 1.2.0
name: ingress-nginx-controller
namespace: ingress-nginx
spec:
minReadySeconds: 0
revisionHistoryLimit: 10
selector:
matchLabels:
app.kubernetes.io/component: controller
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
template:
metadata:
labels:
app.kubernetes.io/component: controller
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
spec:
containers:
- args:
- /nginx-ingress-controller
- --publish-service=$(POD_NAMESPACE)/ingress-nginx-controller
- --election-id=ingress-controller-leader
- --controller-class=k8s.io/ingress-nginx
- --ingress-class=nginx
- --configmap=$(POD_NAMESPACE)/ingress-nginx-controller
- --validating-webhook=:8443
- --validating-webhook-certificate=/usr/local/certificates/cert
- --validating-webhook-key=/usr/local/certificates/key
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: LD_PRELOAD
value: /usr/local/lib/libmimalloc.so
image: k8s.gcr.io/ingress-nginx/controller:v1.2.0@sha256:d8196e3bc1e72547c5dec66d6556c0ff92a23f6d0919b206be170bc90d5f9185
imagePullPolicy: IfNotPresent
lifecycle:
preStop:
exec:
command:
- /wait-shutdown
livenessProbe:
failureThreshold: 5
httpGet:
path: /healthz
port: 10254
scheme: HTTP
initialDelaySeconds: 10
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
name: controller
ports:
- containerPort: 80
name: http
protocol: TCP
- containerPort: 443
name: https
protocol: TCP
- containerPort: 8443
name: webhook
protocol: TCP
readinessProbe:
failureThreshold: 3
httpGet:
path: /healthz
port: 10254
scheme: HTTP
initialDelaySeconds: 10
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
resources:
requests:
cpu: 100m
memory: 90Mi
securityContext:
allowPrivilegeEscalation: true
capabilities:
add:
- NET_BIND_SERVICE
drop:
- ALL
runAsUser: 101
volumeMounts:
- mountPath: /usr/local/certificates/
name: webhook-cert
readOnly: true
dnsPolicy: ClusterFirst
nodeSelector:
kubernetes.io/os: linux
serviceAccountName: ingress-nginx
terminationGracePeriodSeconds: 300
volumes:
- name: webhook-cert
secret:
secretName: ingress-nginx-admission
---
apiVersion: batch/v1
kind: Job
metadata:
labels:
app.kubernetes.io/component: admission-webhook
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/version: 1.2.0
name: ingress-nginx-admission-create
namespace: ingress-nginx
spec:
template:
metadata:
labels:
app.kubernetes.io/component: admission-webhook
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/version: 1.2.0
name: ingress-nginx-admission-create
spec:
containers:
- args:
- create
- --host=ingress-nginx-controller-admission,ingress-nginx-controller-admission.$(POD_NAMESPACE).svc
- --namespace=$(POD_NAMESPACE)
- --secret-name=ingress-nginx-admission
env:
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
image: k8s.gcr.io/ingress-nginx/kube-webhook-certgen:v1.1.1@sha256:64d8c73dca984af206adf9d6d7e46aa550362b1d7a01f3a0a91b20cc67868660
imagePullPolicy: IfNotPresent
name: create
securityContext:
allowPrivilegeEscalation: false
nodeSelector:
kubernetes.io/os: linux
restartPolicy: OnFailure
securityContext:
fsGroup: 2000
runAsNonRoot: true
runAsUser: 2000
serviceAccountName: ingress-nginx-admission
---
apiVersion: batch/v1
kind: Job
metadata:
labels:
app.kubernetes.io/component: admission-webhook
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/version: 1.2.0
name: ingress-nginx-admission-patch
namespace: ingress-nginx
spec:
template:
metadata:
labels:
app.kubernetes.io/component: admission-webhook
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/version: 1.2.0
name: ingress-nginx-admission-patch
spec:
containers:
- args:
- patch
- --webhook-name=ingress-nginx-admission
- --namespace=$(POD_NAMESPACE)
- --patch-mutating=false
- --secret-name=ingress-nginx-admission
- --patch-failure-policy=Fail
env:
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
image: k8s.gcr.io/ingress-nginx/kube-webhook-certgen:v1.1.1@sha256:64d8c73dca984af206adf9d6d7e46aa550362b1d7a01f3a0a91b20cc67868660
imagePullPolicy: IfNotPresent
name: patch
securityContext:
allowPrivilegeEscalation: false
nodeSelector:
kubernetes.io/os: linux
restartPolicy: OnFailure
securityContext:
fsGroup: 2000
runAsNonRoot: true
runAsUser: 2000
serviceAccountName: ingress-nginx-admission
---
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
labels:
app.kubernetes.io/component: controller
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/version: 1.2.0
name: nginx
spec:
controller: k8s.io/ingress-nginx
---
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
labels:
app.kubernetes.io/component: admission-webhook
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/version: 1.2.0
name: ingress-nginx-admission
webhooks:
- admissionReviewVersions:
- v1
clientConfig:
service:
name: ingress-nginx-controller-admission
namespace: ingress-nginx
path: /networking/v1/ingresses
failurePolicy: Fail
matchPolicy: Equivalent
name: validate.nginx.ingress.kubernetes.io
rules:
- apiGroups:
- networking.k8s.io
apiVersions:
- v1
operations:
- CREATE
- UPDATE
resources:
- ingresses
sideEffects: None
It seems that I was able to create the ingress controller successfully.
Image of ingress controller success.
I have created the Deployment and Service for the application that I want to deploy on the cluster with YAML like this:
apiVersion: v1
kind: Namespace
metadata:
name: evcloud
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: evcloud-vertx
namespace: evcloud
spec:
replicas: 3
selector:
matchLabels:
app: evcloud-tcp
template:
metadata:
labels:
app: evcloud-tcp
spec:
containers:
- name: evcloud-tcp
image: location of ecr container
ports:
- containerPort: 2238
- containerPort: 2237
---
apiVersion: v1
kind: Service
metadata:
name: evcloud-tcp
namespace: evcloud
spec:
selector:
app: evcloud-tcp
ports:
- port: 2238
name: remote
protocol: TCP
targetPort: 2238
- port: 2237
name: central
protocol: TCP
targetPort: 2237
It seems that the pods and services have been deployed successfully.
Image of pod running successfully
I have tried to expose the Service for these pods through an Ingress, using the YAML file below.
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: evcloud
namespace: evcloud
annotations:
kubernetes.io/ingress.class: ingress-nginx
spec:
rules:
- host:
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: evcloud-tcp
port:
number: 2237
- host:
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: evcloud-tcp
port:
number: 2238
I was able to create the Ingress successfully, but its address seems to be empty.
error log
I am really new to Kubernetes, so any sort of feedback would be deeply appreciated!
Thank you in advance!!
The problem was caused by them being in different namespaces.
I solved it by using a headless service with externalName.
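For illustration, a minimal sketch of that idea, assuming the Ingress lives in a namespace other than evcloud (the namespace name default below is an assumption): an ExternalName Service in the Ingress's namespace that points at the real Service in evcloud, so the Ingress backend can stay local.
apiVersion: v1
kind: Service
metadata:
  name: evcloud-tcp
  namespace: default               # assumption: the namespace the Ingress is in
spec:
  type: ExternalName
  # resolves to the real service in the evcloud namespace
  externalName: evcloud-tcp.evcloud.svc.cluster.local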

monitoring.coreos.com/v1 servicemonitor resource name may not be empty

I am trying to follow these instructions to set up monitoring with my Prometheus:
https://logiq.ai/scraping-nginx-ingress-controller-metrics-using-helm-prometheus/
However, I get an error when trying to apply this configuration file:
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
labels:
app: kubernetes-ingress
name: service-monitor
namespace: nginx-ingress
spec:
endpoints:
- interval: 30s
path: /metrics
port: prometheus
namespaceSelector:
matchNames:
- logiq
selector:
matchLabels:
app: kubernetes-ingress
this is the error
error: error when retrieving current configuration of:
Resource: "monitoring.coreos.com/v1, Resource=servicemonitors", GroupVersionKind: "monitoring.coreos.com/v1, Kind=ServiceMonitor"
Name: "", Namespace: "default"
from server for: "servicemonitor.yaml": resource name may not be empty
I thought it was a problem with the CRD, but monitoring.coreos.com is installed.
Thank you in advance.
This is my prometheus-kube Prometheus resource:
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
annotations:
meta.helm.sh/release-name: prometheus
meta.helm.sh/release-namespace: ingress
creationTimestamp: "2022-01-17T03:09:49Z"
generation: 1
labels:
app: kube-prometheus-stack-prometheus
app.kubernetes.io/managed-by: Helm
chart: kube-prometheus-stack-10.1.3
heritage: Helm
release: prometheus
name: prometheus-kube-prometheus-prometheus
namespace: ingress
resourceVersion: "2311107"
uid: 48a57afb-2d9a-4f9f-9885-33ca66c59b16
spec:
alerting:
alertmanagers:
- apiVersion: v2
name: prometheus-kube-prometheus-alertmanager
namespace: ingress
pathPrefix: /
port: web
baseImage: quay.io/prometheus/prometheus
enableAdminAPI: false
externalUrl: http://prometheus-kube-prometheus-prometheus.ingress:9090
listenLocal: false
logFormat: logfmt
logLevel: info
paused: false
podMonitorNamespaceSelector: {}
podMonitorSelector:
matchLabels:
release: prometheus
portName: web
probeNamespaceSelector: {}
probeSelector:
matchLabels:
release: prometheus
replicas: 1
retention: 10d
routePrefix: /
ruleNamespaceSelector: {}
ruleSelector:
matchLabels:
app: kube-prometheus-stack
release: prometheus
securityContext:
fsGroup: 2000
runAsGroup: 2000
runAsNonRoot: true
runAsUser: 1000
serviceAccountName: prometheus-kube-prometheus-prometheus
serviceMonitorNamespaceSelector: {}
serviceMonitorSelector:
matchLabels:
release: prometheus
version: v2.21.0
For k8s resources, metadata.name is a required field; you must provide it in the resource YAML before applying it.
If you don't provide metadata.namespace, it defaults to the default namespace.
I think you have some unwanted leading spaces before the name and namespace fields, so they are not being parsed as metadata.name and metadata.namespace.
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
name: service-monitor
namespace: nginx-ingress
labels:
app: kubernetes-ingress
spec:
endpoints:
- interval: 30s
path: /metrics
port: prometheus
namespaceSelector:
matchNames:
- logiq
selector:
matchLabels:
app: kubernetes-ingress
Update:
In your Prometheus CR, you have serviceMonitorSelector set.
spec:
serviceMonitorSelector:
matchLabels:
release: prometheus
Add these labels to your serviceMonitor CR.
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
name: service-monitor
namespace: nginx-ingress
labels:
app: kubernetes-ingress
release: prometheus
Or, you can also update serviceMonitorSelector from the Prometheus CR side.
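For example, a minimal sketch of that second option (the exact change you want may differ): edit the Prometheus CR in the ingress namespace so its serviceMonitorSelector matches the label the ServiceMonitor already carries.
kubectl edit prometheus prometheus-kube-prometheus-prometheus -n ingress
spec:
  serviceMonitorSelector:
    matchLabels:
      app: kubernetes-ingress
Note that an empty selector (serviceMonitorSelector: {}) would select all ServiceMonitors, but since this CR is managed by Helm, manual edits may be overwritten on the next upgrade of the kube-prometheus-stack release.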

GKE Kubernetes LoadBalancer returns connection reset by peer

I have encountered a strange problem with my cluster.
In my cluster I have a Deployment and a LoadBalancer Service exposing it.
It worked like a charm, but suddenly the LoadBalancer started to return an error:
curl: (56) Recv failure: Connection reset by peer
The error occurs while the pod and the LoadBalancer are running and show no errors in their logs.
What I have already tried:
deleting the pod
redeploying the Service and the Deployment from scratch
but the issue persists.
my service yaml:
apiVersion: v1
kind: Service
metadata:
annotations:
cloud.google.com/neg: '{"ingress":true}'
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"app.kubernetes.io/instance":"RELEASE-NAME","app.kubernetes.io/name":"APP-NAME","app.kubernetes.io/version":"latest"},"name":"APP-NAME","namespace":"namespacex"},"spec":{"ports":[{"name":"web","port":3000}],"selector":{"app.kubernetes.io/instance":"RELEASE-NAME","app.kubernetes.io/name":"APP-NAME"},"type":"LoadBalancer"}}
creationTimestamp: "2021-08-03T07:55:00Z"
finalizers:
- service.kubernetes.io/load-balancer-cleanup
labels:
app.kubernetes.io/instance: RELEASE-NAME
app.kubernetes.io/name: APP-NAME
app.kubernetes.io/version: latest
name: APP-NAME
namespace: namespacex
resourceVersion: "14583904"
uid: 7fb4d7e6-4316-44e5-8f9b-7a466bc776da
spec:
clusterIP: 10.4.18.36
clusterIPs:
- 10.4.18.36
externalTrafficPolicy: Cluster
ports:
- name: web
nodePort: 30970
port: 3000
protocol: TCP
targetPort: 3000
selector:
app.kubernetes.io/instance: RELEASE-NAME
app.kubernetes.io/name: APP-NAME
sessionAffinity: None
type: LoadBalancer
status:
loadBalancer:
ingress:
- ip: xx.xxx.xxx.xxx
my deployment yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
name: APP-NAME
labels:
app.kubernetes.io/name: APP-NAME
app.kubernetes.io/instance: RELEASE-NAME
app.kubernetes.io/version: "latest"
spec:
replicas: 1
selector:
matchLabels:
app.kubernetes.io/name: APP-NAME
app.kubernetes.io/instance: RELEASE-NAME
template:
metadata:
annotations:
checksum/config: 5e6ff0d6fa64b90b0365e9f3939cefc0a619502b32564c4ff712067dbe44ab90
checksum/secret: 76e0a1351da90c0cef06851e3aa9e7c80b415c29b11f473d4a2520ade9c892ce
labels:
app.kubernetes.io/name: APP-NAME
app.kubernetes.io/instance: RELEASE-NAME
spec:
serviceAccountName: APP-NAME
containers:
- name: APP-NAME
image: 'docker.io/xxxxxxxx:latest'
imagePullPolicy: "Always"
ports:
- name: http
containerPort: 3000
livenessProbe:
httpGet:
path: /balancer/
port: http
readinessProbe:
httpGet:
path: /balancer/
port: http
env:
...
volumeMounts:
- name: config-volume
mountPath: /home/app/config/
resources:
limits:
cpu: 400m
memory: 256Mi
requests:
cpu: 400m
memory: 256Mi
volumes:
- name: config-volume
configMap:
name: app-config
imagePullSecrets:
- name: secret
The issue in my case turned out to be a network component (such as a firewall) blocking the outbound connection after deeming the cluster 'unsafe' for no apparent reason,
so in essence it was not a K8s issue but an IT one.
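One way to confirm that kind of conclusion (a generic debugging sketch, not part of the original answer) is to bypass the load balancer and hit the Service from inside the cluster; if that succeeds while the external path still resets, the problem is outside Kubernetes:
kubectl run curl-test --rm -it --restart=Never -n namespacex --image=curlimages/curl -- curl -sv http://APP-NAME:3000/balancer/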

503 Service Temporarily Unavailable Nginx + Kibana + AKS

I have deployed Kibana in AKS with server.basePath set to /logs, since I want it served under a subpath. I am trying to access the Kibana service through the nginx ingress controller, but it returns 503 Service Temporarily Unavailable even though the Service and Pod are running. Please help me with this.
Kibana Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
name: kibana
namespace: kube-logging
labels:
app.kubernetes.io/name: kibana
helm.sh/chart: kibana-0.1.0
app.kubernetes.io/instance: icy-coral
app.kubernetes.io/managed-by: Tiller
spec:
replicas: 1
selector:
matchLabels:
app.kubernetes.io/name: kibana
app.kubernetes.io/instance: icy-coral
template:
metadata:
labels:
app.kubernetes.io/name: kibana
app.kubernetes.io/instance: icy-coral
spec:
containers:
- name: kibana
image: "docker.elastic.co/kibana/kibana:7.6.0"
imagePullPolicy: IfNotPresent
ports:
- name: http
containerPort: 5601
protocol: TCP
env:
- name: ELASTICSEARCH_URL
value: http://elasticsearch:9200
- name: SERVER_BASEPATH
value: /logs
- name: SERVER_REWRITEBASEPATH
value: "true"
resources:
limits:
cpu: 1000m
requests:
cpu: 100m
Kibana service:
apiVersion: v1
kind: Service
metadata:
name: kibana
namespace: kube-logging
labels:
app.kubernetes.io/name: kibana
helm.sh/chart: kibana-0.1.0
app.kubernetes.io/instance: icy-coral
app.kubernetes.io/managed-by: Tiller
spec:
type: ClusterIP
ports:
- port: 80
targetPort: 5601
protocol: TCP
name: http
selector:
app.kubernetes.io/name: kibana
app.kubernetes.io/instance: icy-coral
Kibana Ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: kibana
labels:
app.kubernetes.io/name: kibana
helm.sh/chart: kibana-0.1.0
app.kubernetes.io/instance: icy-coral
app.kubernetes.io/managed-by: Tiller
annotations:
ingress.kubernetes.io/send-timeout: "600"
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/proxy-connect-timeout: "600"
nginx.ingress.kubernetes.io/proxy-read-timeout: "1800"
nginx.ingress.kubernetes.io/proxy-send-timeout: "1800"
nginx.ingress.kubernetes.io/rewrite-target: /$1
nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
rules:
- host: ""
http:
paths:
- path: /logs/?(.*)
backend:
serviceName: kibana
servicePort: 80
Ensure Kibana is running with:
kubectl logs kibana
Check that the endpoints of the service are not empty:
kubectl describe svc kibana
Check that the ingress is correctly configured:
kubectl describe ingress kibana
Check the ingress controller logs:
kubectl logs -n nginx-ingress-controller-.....
Update:
An Ingress can only reference Services in its own namespace, so try moving the Ingress to the kube-logging namespace.
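For illustration, an abridged sketch of the same Ingress with the namespace set explicitly (the other annotations from the original manifest are omitted here for brevity):
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: kibana
  namespace: kube-logging        # same namespace as the kibana Service
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  rules:
  - http:
      paths:
      - path: /logs/?(.*)
        backend:
          serviceName: kibana
          servicePort: 80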
Checkout this: https://github.com/kubernetes/kubernetes/issues/17088

get functional yaml files from Helm

Is there a way to intercept the YAML files from Helm after it has built them, but right before the objects are created?
What I'm doing now is to create the objects and then export them with:
for file in $(kubectl get OBJECT -n maesh -oname); do kubectl get $file -n maesh --export -oyaml > $file.yaml; done
This works fine; I only have to create the object directory beforehand. I was just wondering if there is a cleaner way of doing this.
By the way, the reason is that Traefik's service mesh (maesh) is still in its infancy and the only way to install it is through Helm; they don't yet have the plain files in their repo.
You can run
helm template .
and it will output something like:
---
# Source: my-app/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
name: release-name-my-app
annotations:
kubernetes.io/ingress.class: nginx
kubernetes.io/tls-acme: "true"
labels:
app.kubernetes.io/name: my-app
helm.sh/chart: my-app-0.1.0
app.kubernetes.io/instance: release-name
app.kubernetes.io/version: "1.0"
app.kubernetes.io/managed-by: Tiller
spec:
type: ClusterIP
ports:
- port: 80
targetPort: http
protocol: TCP
name: http
selector:
app.kubernetes.io/name: my-app
app.kubernetes.io/instance: release-name
---
# Source: my-app/templates/tests/test-connection.yaml
apiVersion: v1
kind: Pod
metadata:
name: "release-name-my-app-test-connection"
labels:
app.kubernetes.io/name: my-app
helm.sh/chart: my-app-0.1.0
app.kubernetes.io/instance: release-name
app.kubernetes.io/version: "1.0"
app.kubernetes.io/managed-by: Tiller
annotations:
"helm.sh/hook": test-success
spec:
containers:
- name: wget
image: busybox
command: ['wget']
args: ['release-name-my-app:80']
restartPolicy: Never
---
# Source: my-app/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: release-name-my-app
labels:
app.kubernetes.io/name: my-app
helm.sh/chart: my-app-0.1.0
app.kubernetes.io/instance: release-name
app.kubernetes.io/version: "1.0"
app.kubernetes.io/managed-by: Tiller
spec:
replicas: 1
selector:
matchLabels:
app.kubernetes.io/name: my-app
app.kubernetes.io/instance: release-name
template:
metadata:
labels:
app.kubernetes.io/name: my-app
app.kubernetes.io/instance: release-name
spec:
containers:
- name: my-app
image: "nginx:stable"
imagePullPolicy: IfNotPresent
ports:
- name: http
containerPort: 80
protocol: TCP
livenessProbe:
httpGet:
path: /
port: http
readinessProbe:
httpGet:
path: /
port: http
resources:
{}
---
# Source: my-app/templates/ingress.yaml
and that is a valid file with k8s objects.
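If the goal is just to capture the manifests right before creation, you can also redirect the rendered output to a file or pipe it straight to kubectl (a hedged suggestion; the chart path ./maesh is an assumption about where the chart lives locally):
helm template ./maesh > maesh-rendered.yaml      # keep the rendered manifests
helm template ./maesh | kubectl apply -f -       # or create the objects from them directly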