get functional yaml files from Helm - kubernetes-helm

Is there a way to intercept the yaml files from helm after it has built them, but right before the creation of the objects?
What I'm doing now is to create the objects and then export them through:
for file in $(kubectl get OBJECT -n maesh -oname); do kubectl get $file -n maesh --export -oyaml > $file.yaml; done
This works fine; I only have to create the object directory beforehand. I was just wondering if there is a cleaner way of doing this.
And, by the way, the reason is that the service mesh of traefik (maesh) is still in its infancy, and the only way to install it is through helm. They don't yet have the manifest files in their repo.

You can run
helm template .
This will output something like
---
# Source: my-app/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: release-name-my-app
  annotations:
    kubernetes.io/ingress.class: nginx
    kubernetes.io/tls-acme: "true"
  labels:
    app.kubernetes.io/name: my-app
    helm.sh/chart: my-app-0.1.0
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/version: "1.0"
    app.kubernetes.io/managed-by: Tiller
spec:
  type: ClusterIP
  ports:
    - port: 80
      targetPort: http
      protocol: TCP
      name: http
  selector:
    app.kubernetes.io/name: my-app
    app.kubernetes.io/instance: release-name
---
# Source: my-app/templates/tests/test-connection.yaml
apiVersion: v1
kind: Pod
metadata:
  name: "release-name-my-app-test-connection"
  labels:
    app.kubernetes.io/name: my-app
    helm.sh/chart: my-app-0.1.0
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/version: "1.0"
    app.kubernetes.io/managed-by: Tiller
  annotations:
    "helm.sh/hook": test-success
spec:
  containers:
    - name: wget
      image: busybox
      command: ['wget']
      args: ['release-name-my-app:80']
  restartPolicy: Never
---
# Source: my-app/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: release-name-my-app
  labels:
    app.kubernetes.io/name: my-app
    helm.sh/chart: my-app-0.1.0
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/version: "1.0"
    app.kubernetes.io/managed-by: Tiller
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: my-app
      app.kubernetes.io/instance: release-name
  template:
    metadata:
      labels:
        app.kubernetes.io/name: my-app
        app.kubernetes.io/instance: release-name
    spec:
      containers:
        - name: my-app
          image: "nginx:stable"
          imagePullPolicy: IfNotPresent
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
          livenessProbe:
            httpGet:
              path: /
              port: http
          readinessProbe:
            httpGet:
              path: /
              port: http
          resources:
            {}
---
# Source: my-app/templates/ingress.yaml
and that is a valid file with k8s objects.
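If you prefer each rendered manifest in its own file rather than one stream, helm template can also write the output per template. A sketch, assuming a Helm 2-era client (the chart name and output directory are placeholders):

```shell
# Fetch the chart locally without installing it
helm fetch --untar stable/my-app

# Render the templates offline, one file per template under ./manifests
helm template ./my-app --output-dir ./manifests
```

This avoids creating the objects in the cluster at all, so no --export round-trip is needed.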

Related

monitoring.coreos.com/v1 ServiceMonitor: resource name may not be empty

I am trying to follow this instruction to monitor my Prometheus:
https://logiq.ai/scraping-nginx-ingress-controller-metrics-using-helm-prometheus/
Anyhow, I got a problem when trying to apply this configuration file
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  labels:
    app: kubernetes-ingress
    name: service-monitor
    namespace: nginx-ingress
spec:
  endpoints:
    - interval: 30s
      path: /metrics
      port: prometheus
  namespaceSelector:
    matchNames:
      - logiq
  selector:
    matchLabels:
      app: kubernetes-ingress
this is the error
error: error when retrieving current configuration of:
Resource: "monitoring.coreos.com/v1, Resource=servicemonitors", GroupVersionKind: "monitoring.coreos.com/v1, Kind=ServiceMonitor"
Name: "", Namespace: "default"
from server for: "servicemonitor.yaml": resource name may not be empty
I thought it was about the CRD, but the monitoring.coreos.com CRDs are installed.
Thank you in advance.
This is my prometheus-kube Prometheus custom resource:
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  annotations:
    meta.helm.sh/release-name: prometheus
    meta.helm.sh/release-namespace: ingress
  creationTimestamp: "2022-01-17T03:09:49Z"
  generation: 1
  labels:
    app: kube-prometheus-stack-prometheus
    app.kubernetes.io/managed-by: Helm
    chart: kube-prometheus-stack-10.1.3
    heritage: Helm
    release: prometheus
  name: prometheus-kube-prometheus-prometheus
  namespace: ingress
  resourceVersion: "2311107"
  uid: 48a57afb-2d9a-4f9f-9885-33ca66c59b16
spec:
  alerting:
    alertmanagers:
      - apiVersion: v2
        name: prometheus-kube-prometheus-alertmanager
        namespace: ingress
        pathPrefix: /
        port: web
  baseImage: quay.io/prometheus/prometheus
  enableAdminAPI: false
  externalUrl: http://prometheus-kube-prometheus-prometheus.ingress:9090
  listenLocal: false
  logFormat: logfmt
  logLevel: info
  paused: false
  podMonitorNamespaceSelector: {}
  podMonitorSelector:
    matchLabels:
      release: prometheus
  portName: web
  probeNamespaceSelector: {}
  probeSelector:
    matchLabels:
      release: prometheus
  replicas: 1
  retention: 10d
  routePrefix: /
  ruleNamespaceSelector: {}
  ruleSelector:
    matchLabels:
      app: kube-prometheus-stack
      release: prometheus
  securityContext:
    fsGroup: 2000
    runAsGroup: 2000
    runAsNonRoot: true
    runAsUser: 1000
  serviceAccountName: prometheus-kube-prometheus-prometheus
  serviceMonitorNamespaceSelector: {}
  serviceMonitorSelector:
    matchLabels:
      release: prometheus
  version: v2.21.0
For k8s resources, metadata.name is a required field. You must provide metadata.name in the resource YAML before applying it.
In the case of metadata.namespace, if you don't provide it, it defaults to the default namespace.
I think you have some unwanted leading spaces before the name and namespace fields, which makes them part of metadata.labels instead of metadata itself.
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: service-monitor
  namespace: nginx-ingress
  labels:
    app: kubernetes-ingress
spec:
  endpoints:
    - interval: 30s
      path: /metrics
      port: prometheus
  namespaceSelector:
    matchNames:
      - logiq
  selector:
    matchLabels:
      app: kubernetes-ingress
Update:
In your Prometheus CR, you have serviceMonitorSelector set.
spec:
  serviceMonitorSelector:
    matchLabels:
      release: prometheus
Add these labels to your serviceMonitor CR.
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: service-monitor
  namespace: nginx-ingress
  labels:
    app: kubernetes-ingress
    release: prometheus
Or, you can also update serviceMonitorSelector from the Prometheus CR side.
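If the ServiceMonitor already exists in the cluster, the label can also be added in place instead of re-applying the manifest; a sketch using the names from the manifests above:

```shell
# Add the release label the Prometheus serviceMonitorSelector expects
kubectl label servicemonitor service-monitor -n nginx-ingress release=prometheus

# Verify the label is present so Prometheus can select it
kubectl get servicemonitor -n nginx-ingress --show-labels
```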

GKE Kubernetes LoadBalancer returns connection reset by peer

I have encountered a strange problem with my cluster.
In my cluster I have a Deployment and a LoadBalancer Service exposing this Deployment.
It worked like a charm, but suddenly the LoadBalancer started to return an error:
curl: (56) Recv failure: Connection reset by peer
The error shows up while the pod and the LoadBalancer are running and have no errors in their logs.
What I already tried:
deleting the pod
redeploying the Service + Deployment from scratch
but the issue persists.
my service yaml:
apiVersion: v1
kind: Service
metadata:
  annotations:
    cloud.google.com/neg: '{"ingress":true}'
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"app.kubernetes.io/instance":"RELEASE-NAME","app.kubernetes.io/name":"APP-NAME","app.kubernetes.io/version":"latest"},"name":"APP-NAME","namespace":"namespacex"},"spec":{"ports":[{"name":"web","port":3000}],"selector":{"app.kubernetes.io/instance":"RELEASE-NAME","app.kubernetes.io/name":"APP-NAME"},"type":"LoadBalancer"}}
  creationTimestamp: "2021-08-03T07:55:00Z"
  finalizers:
    - service.kubernetes.io/load-balancer-cleanup
  labels:
    app.kubernetes.io/instance: RELEASE-NAME
    app.kubernetes.io/name: APP-NAME
    app.kubernetes.io/version: latest
  name: APP-NAME
  namespace: namespacex
  resourceVersion: "14583904"
  uid: 7fb4d7e6-4316-44e5-8f9b-7a466bc776da
spec:
  clusterIP: 10.4.18.36
  clusterIPs:
    - 10.4.18.36
  externalTrafficPolicy: Cluster
  ports:
    - name: web
      nodePort: 30970
      port: 3000
      protocol: TCP
      targetPort: 3000
  selector:
    app.kubernetes.io/instance: RELEASE-NAME
    app.kubernetes.io/name: APP-NAME
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer:
    ingress:
      - ip: xx.xxx.xxx.xxx
my deployment yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: APP-NAME
  labels:
    app.kubernetes.io/name: APP-NAME
    app.kubernetes.io/instance: RELEASE-NAME
    app.kubernetes.io/version: "latest"
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: APP-NAME
      app.kubernetes.io/instance: RELEASE-NAME
  template:
    metadata:
      annotations:
        checksum/config: 5e6ff0d6fa64b90b0365e9f3939cefc0a619502b32564c4ff712067dbe44ab90
        checksum/secret: 76e0a1351da90c0cef06851e3aa9e7c80b415c29b11f473d4a2520ade9c892ce
      labels:
        app.kubernetes.io/name: APP-NAME
        app.kubernetes.io/instance: RELEASE-NAME
    spec:
      serviceAccountName: APP-NAME
      containers:
        - name: APP-NAME
          image: 'docker.io/xxxxxxxx:latest'
          imagePullPolicy: "Always"
          ports:
            - name: http
              containerPort: 3000
          livenessProbe:
            httpGet:
              path: /balancer/
              port: http
          readinessProbe:
            httpGet:
              path: /balancer/
              port: http
          env:
            ...
          volumeMounts:
            - name: config-volume
              mountPath: /home/app/config/
          resources:
            limits:
              cpu: 400m
              memory: 256Mi
            requests:
              cpu: 400m
              memory: 256Mi
      volumes:
        - name: config-volume
          configMap:
            name: app-config
      imagePullSecrets:
        - name: secret
The issue in my case turned out to be a network component (like a firewall) blocking the outbound connection after deeming the cluster 'unsafe' for no apparent reason.
So in essence it was not a K8s issue but an IT one.
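One way to narrow a problem like this down is to bypass the external load balancer and hit the Service from inside the cluster; if that works while the external IP resets connections, the fault is in front of the cluster. A sketch using the namespace and ports from the manifests above:

```shell
# Curl the Service from a throwaway pod inside the cluster
kubectl run tmp-curl --rm -it --restart=Never \
  --image=curlimages/curl -- \
  curl -sv http://APP-NAME.namespacex.svc:3000

# Also try the NodePort directly from a cluster node
curl -sv http://<node-ip>:30970
```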

Deploy to kubernetes

I want to deploy frontend and backend applications on kubernetes. I wrote yaml files (I got these from helm template):
# Source: quality-control/templates/ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: RELEASE-NAME-quality-control
  labels:
    app.kubernetes.io/name: quality-control
    helm.sh/chart: quality-control-0.1.0
    app.kubernetes.io/instance: RELEASE-NAME
    app.kubernetes.io/managed-by: Tiller
spec:
  rules:
    - host: "quality-control.ru"
      http:
        paths:
          - path: /
            backend:
              serviceName: RELEASE-NAME-quality-control
              servicePort: http
---
# Source: quality-control/templates/deployment.yaml
apiVersion: apps/v1beta2
kind: List
items:
  - apiVersion: apps/v1beta2
    kind: Deployment
    metadata:
      name: quality-control-frontend
      labels:
        app.kubernetes.io/name: quality-control-frontend
        helm.sh/chart: quality-control-0.1.0
        app.kubernetes.io/instance: RELEASE-NAME
        app.kubernetes.io/managed-by: Tiller
    spec:
      replicas: 1
      selector:
      matchLabels:
        app.kubernetes.io/name: quality-control-frontend
        app.kubernetes.io/instance: RELEASE-NAME
      template:
        metadata:
          labels:
            app.kubernetes.io/name: quality-control-frontend
            app.kubernetes.io/instance: RELEASE-NAME
            logger: external
            sourcetype: quality-control-frontend
        spec:
          containers:
            - name: quality-control
              image: "registry.***.ru:5050/quality-control-frontend:stable"
              imagePullPolicy: Always
              env:
                - name: spring_profiles_active
                  value: dev
              ports:
                - containerPort: 80
                  protocol: TCP
              livenessProbe:
                httpGet:
                  path: /healthcheck
                  port: 80
                  protocol: TCP
                initialDelaySeconds: 10
                periodSeconds: 10
              resources:
                limits:
                  cpu: 2
                  memory: 2048Mi
                requests:
                  cpu: 1
                  memory: 1024Mi
  - apiVersion: apps/v1beta2
    kind: Deployment
    metadata:
      name: quality-control-backend
      labels:
        app.kubernetes.io/name: quality-control-backend
        helm.sh/chart: quality-control-0.1.0
        app.kubernetes.io/instance: RELEASE-NAME
        app.kubernetes.io/managed-by: Tiller
    spec:
      replicas: 1
      selector:
        matchLabels:
          app.kubernetes.io/name: quality-control-backend
          app.kubernetes.io/instance: RELEASE-NAME
      template:
        metadata:
          labels:
            app.kubernetes.io/name: quality-control-backend
            app.kubernetes.io/instance: RELEASE-NAME
            logger: external
            sourcetype: quality-control-backend
        spec:
          containers:
            - name: quality-control
              image: "registry.***.ru:5050/quality-control-backend:stable"
              imagePullPolicy: Always
              env:
                - name: spring_profiles_active
                  value: dev
              ports:
                - containerPort: 80
                  protocol: TCP
              resources:
                limits:
                  cpu: 2
                  memory: 2048Mi
                requests:
                  cpu: 1
                  memory: 1024Mi
---
# Source: quality-control/templates/service.yaml
apiVersion: v1
kind: List
items:
  - apiVersion: v1
    kind: Service
    metadata:
      name: quality-control-frontend
      labels:
        app.kubernetes.io/name: quality-control-frontend
        helm.sh/chart: quality-control-0.1.0
        app.kubernetes.io/instance: RELEASE-NAME
        app.kubernetes.io/managed-by: Tiller
    spec:
      type: ClusterIP
      ports:
        - port: 80
          targetPort: 80
          protocol: TCP
      selector:
        app.kubernetes.io/name: quality-control-frontend
        app.kubernetes.io/instance: RELEASE-NAME
  - apiVersion: v1
    kind: Service
    metadata:
    name: quality-control-backend
    labels:
      app.kubernetes.io/name: quality-control-backend
      helm.sh/chart: quality-control-0.1.0
      app.kubernetes.io/instance: RELEASE-NAME}
      app.kubernetes.io/managed-by: Tiller
    spec:
      type: ClusterIP
      ports:
        - port: 8080
          targetPort: 8080
          protocol: TCP
      selector:
        app.kubernetes.io/name: quality-control-backend
        app.kubernetes.io/instance: RELEASE-NAME
But I get an error when deploying:
Error: release quality-control failed: Deployment.apps "quality-control-frontend" is invalid: [spec.selector: Required value, spec.template.metadata.labels: Invalid value: map[string]string{"app.kubernetes.io/instance":"quality-control", "app.kubernetes.io/name":"quality-control-frontend"}: `selector` does not match template `labels`]
There is an indentation issue in the first Deployment object: matchLabels is not nested under selector, so spec.selector ends up empty.
Change it from
spec:
  replicas: 1
  selector:
  matchLabels:
    app.kubernetes.io/name: quality-control-frontend
    app.kubernetes.io/instance: RELEASE-NAME
to
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: quality-control-frontend
      app.kubernetes.io/instance: RELEASE-NAME
There is also a problem in the second item of the Service list: the metadata fields are misindented and there is a stray } after RELEASE-NAME. Change it from
- apiVersion: v1
  kind: Service
  metadata:
  name: quality-control-backend
  labels:
    app.kubernetes.io/name: quality-control-backend
    helm.sh/chart: quality-control-0.1.0
    app.kubernetes.io/instance: RELEASE-NAME}
    app.kubernetes.io/managed-by: Tiller
to
- apiVersion: v1
  kind: Service
  metadata:
    name: quality-control-backend
    labels:
      app.kubernetes.io/name: quality-control-backend
      helm.sh/chart: quality-control-0.1.0
      app.kubernetes.io/instance: RELEASE-NAME
      app.kubernetes.io/managed-by: Tiller
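Indentation slips like these can be caught before the release fails by dry-running the rendered output against the API server; a sketch, assuming a Helm 2-era setup where the chart renders with helm template:

```shell
# Render the chart and validate the objects without creating them
helm template . | kubectl apply --dry-run -f -

# Also worth running: static checks on the chart itself
helm lint .
```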

503 Service Temporarily Unavailable Nginx + Kibana + AKS

I have deployed Kibana in AKS with a server.basepath of /logs, since I want it served under a subpath. I am trying to access the Kibana service through the nginx ingress controller, but it gives 503 Service Unavailable even though the Service/Pod is running. Please help me with this.
Kibana Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kibana
  namespace: kube-logging
  labels:
    app.kubernetes.io/name: kibana
    helm.sh/chart: kibana-0.1.0
    app.kubernetes.io/instance: icy-coral
    app.kubernetes.io/managed-by: Tiller
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: kibana
      app.kubernetes.io/instance: icy-coral
  template:
    metadata:
      labels:
        app.kubernetes.io/name: kibana
        app.kubernetes.io/instance: icy-coral
    spec:
      containers:
        - name: kibana
          image: "docker.elastic.co/kibana/kibana:7.6.0"
          imagePullPolicy: IfNotPresent
          ports:
            - name: http
              containerPort: 5601
              protocol: TCP
          env:
            - name: ELASTICSEARCH_URL
              value: http://elasticsearch:9200
            - name: SERVER_BASEPATH
              value: /logs
            - name: SERVER_REWRITEBASEPATH
              value: "true"
          resources:
            limits:
              cpu: 1000m
            requests:
              cpu: 100m
Kibana service:
apiVersion: v1
kind: Service
metadata:
  name: kibana
  namespace: kube-logging
  labels:
    app.kubernetes.io/name: kibana
    helm.sh/chart: kibana-0.1.0
    app.kubernetes.io/instance: icy-coral
    app.kubernetes.io/managed-by: Tiller
spec:
  type: ClusterIP
  ports:
    - port: 80
      targetPort: 5601
      protocol: TCP
      name: http
  selector:
    app.kubernetes.io/name: kibana
    app.kubernetes.io/instance: icy-coral
Kibana Ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: kibana
  labels:
    app.kubernetes.io/name: kibana
    helm.sh/chart: kibana-0.1.0
    app.kubernetes.io/instance: icy-coral
    app.kubernetes.io/managed-by: Tiller
  annotations:
    ingress.kubernetes.io/send-timeout: "600"
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/proxy-connect-timeout: "600"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "1800"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "1800"
    nginx.ingress.kubernetes.io/rewrite-target: /$1
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
  rules:
    - host: ""
      http:
        paths:
          - path: /logs/?(.*)
            backend:
              serviceName: kibana
              servicePort: 80
Ensure kibana is running with:
kubectl logs -n kube-logging kibana
Check that the endpoints for the service are not empty:
kubectl describe svc kibana -n kube-logging
Check the ingress is correctly configured:
kubectl describe ingress kibana
Check the ingress-controller logs:
kubectl logs -n <ingress-namespace> nginx-ingress-controller-.....
Update:
You can only reference Services in the same namespace as the Ingress. So try moving the Ingress to the kube-logging namespace.
Checkout this: https://github.com/kubernetes/kubernetes/issues/17088
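The endpoints check above is usually the decisive one: an empty ENDPOINTS column means the Service selector does not match the pod labels (or the pod is not Ready). A sketch, using the namespace from the manifests above:

```shell
# If ENDPOINTS is empty, the selector/label match is broken
kubectl get endpoints kibana -n kube-logging

# Compare against the actual pod labels
kubectl get pods -n kube-logging --show-labels
```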

Minikube Ingress: unexpected error reading configmap kube-system/tcp-services: configmap kube-system/tcp-services was not found

I am running minikube with the below configuration:
Environment:
minikube version: v0.25.2
macOS version: 10.12.6
DriverName: virtualbox
ISO: minikube-v0.25.1.iso
I created an Ingress resource to map the service messy-chimp-emauser to path /.
But when I roll out changes to minikube, I get the below logs in the pod for nginx-ingress-controller:
5 controller.go:811] service default/messy-chimp-emauser does not have any active endpoints
5 controller.go:245] unexpected error reading configmap kube-system/tcp-services: configmap kube-system/tcp-services was not found
5 controller.go:245] unexpected error reading configmap kube-system/udp-services: configmap kube-system/udp-services was not found
And hence I am getting HTTP 503 when trying to access the service from a browser.
Steps to reproduce
STEP 1
minikube addons enable ingress
STEP 2
kubectl create -f kube-resources.yml
(replaced actual-image with k8s.gcr.io/echoserver:1.4)
kube-resources.yml
apiVersion: v1
kind: Service
metadata:
  name: messy-chimp-emauser
  labels:
    app: messy-chimp-emauser
    chart: emauser-0.1.0
    release: messy-chimp
    heritage: Tiller
spec:
  type: ClusterIP
  ports:
    - port: 80
      targetPort: http
      protocol: TCP
      name: emauser
  selector:
    app: messy-chimp-emauser
    release: messy-chimp
---
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: messy-chimp-emauser
  labels:
    app: emauser
    chart: emauser-0.1.0
    release: messy-chimp
    heritage: Tiller
spec:
  replicas: 1
  selector:
    matchLabels:
      app: emauser
      release: messy-chimp
  template:
    metadata:
      labels:
        app: emauser
        release: messy-chimp
    spec:
      containers:
        - name: emauser
          image: "k8s.gcr.io/echoserver:1.4"
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 80
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: messy-chimp-ema-chart
  labels:
    app: ema-chart
    chart: ema-chart-0.1.0
    release: messy-chimp
    heritage: Tiller
spec:
  rules:
    - http:
        paths:
          - path: /
            backend:
              serviceName: messy-chimp-emauser
              servicePort: emauser
Any suggestions on this would be appreciated.
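The tcp-services/udp-services messages are only warnings and can be silenced by creating the (empty) configmaps the controller looks for; the 503 itself matches the "does not have any active endpoints" log line, which points at a selector/port mismatch visible in the manifests above (the Service selects app: messy-chimp-emauser while the pods are labeled app: emauser, and targetPort: http names a port the container never declares):

```shell
# Silence the configmap warnings
kubectl create configmap tcp-services -n kube-system
kubectl create configmap udp-services -n kube-system

# Confirm the real cause of the 503: no endpoints behind the Service
kubectl get endpoints messy-chimp-emauser
kubectl get pods --show-labels
```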