Cannot add linkerd inject annotation in deployment file - kubernetes

I have a Kubernetes deployment file, user.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: user-deployment
  namespace: stage
spec:
  replicas: 1
  selector:
    matchLabels:
      app: user
  template:
    metadata:
      labels:
        app: user
      annotations:
        prometheus.io/scrape: 'true'
        prometheus.io/port: '9022'
    spec:
      nodeSelector:
        env: stage
      containers:
      - name: user
        image: <docker image path>
        imagePullPolicy: Always
        resources:
          limits:
            memory: "512Mi"
            cpu: "250m"
          requests:
            memory: "256Mi"
            cpu: "200m"
        ports:
        - containerPort: 8080
        env:
        - name: MODE
          value: "local"
        - name: PORT
          value: ":8080"
        - name: REDIS_HOST
          value: "xxx"
        - name: KAFKA_ENABLED
          value: "true"
        - name: BROKERS
          value: "xxx"
      imagePullSecrets:
      - name: regcred
---
apiVersion: v1
kind: Service
metadata:
  namespace: stage
  name: user
spec:
  selector:
    app: user
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 8080
---
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: user
  namespace: stage
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: user-deployment
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 50
  - type: Resource
    resource:
      name: memory
      target:
        type: AverageValue
        averageValue: 300Mi
This deployment is already running with Linkerd injected, applied with the command cat user.yaml | linkerd inject - | kubectl apply -f -.
Now I want to add the linkerd.io/inject annotation (as mentioned here) and use the command kubectl apply -f user.yaml, just as I do for a deployment without Linkerd injected; the two workflows are compared below.
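For clarity, the two workflows being compared are roughly (file path as above):
# Current workflow: inject the proxy at apply time
cat user.yaml | linkerd inject - | kubectl apply -f -
# Desired workflow: rely on the linkerd.io/inject annotation and apply directly
kubectl apply -f user.yaml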
However, with the modified user.yaml (after adding the linkerd.io/inject annotation to the deployment):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: user-deployment
  namespace: stage
spec:
  replicas: 1
  selector:
    matchLabels:
      app: user
  template:
    metadata:
      labels:
        app: user
      annotations:
        prometheus.io/scrape: 'true'
        prometheus.io/port: '9022'
        linkerd.io/inject: enabled
    spec:
      nodeSelector:
        env: stage
      containers:
      - name: user
        image: <docker image path>
        imagePullPolicy: Always
        resources:
          limits:
            memory: "512Mi"
            cpu: "250m"
          requests:
            memory: "256Mi"
            cpu: "200m"
        ports:
        - containerPort: 8080
        env:
        - name: MODE
          value: "local"
        - name: PORT
          value: ":8080"
        - name: REDIS_HOST
          value: "xxx"
        - name: KAFKA_ENABLED
          value: "true"
        - name: BROKERS
          value: "xxx"
      imagePullSecrets:
      - name: regcred
---
apiVersion: v1
kind: Service
metadata:
  namespace: stage
  name: user
spec:
  selector:
    app: user
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 8080
---
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: user
  namespace: stage
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: user-deployment
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 50
  - type: Resource
    resource:
      name: memory
      target:
        type: AverageValue
        averageValue: 300Mi
When I run kubectl apply -f user.yaml, it throws this error:
service/user unchanged
horizontalpodautoscaler.autoscaling/user configured
Error from server (BadRequest): error when creating "user.yaml": Deployment in version "v1" cannot be handled as a Deployment: v1.Deployment.Spec: v1.DeploymentSpec.Template: v1.PodTemplateSpec.Spec: v1.PodSpec.Containers: []v1.Container: v1.Container.Env: []v1.EnvVar: v1.EnvVar.v1.EnvVar.Value: ReadString: expects " or n, but found 1, error found in #10 byte of ...|,"value":1},{"name":|..., bigger context ...|ue":":8080"}
Can anyone please point out where I have gone wrong in adding the annotation?
Thanks

Try with double quotes, as below:
annotations:
  prometheus.io/scrape: "true"
  prometheus.io/port: "9022"
  linkerd.io/inject: enabled
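Assuming the apply then succeeds, one way to confirm the proxy is actually being injected (namespace and label taken from the question) is:
kubectl apply -f user.yaml
# The pod should now list a linkerd-proxy container alongside the app container
kubectl get pods -n stage -l app=user -o jsonpath='{.items[*].spec.containers[*].name}'
# Or let Linkerd verify the data plane in that namespace
linkerd check --proxy --namespace stage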

Related

Error while trying to create ReplicaSet on Kubernetes with YAML

I'm a beginner with Kubernetes and YAML.
I've been trying to deploy a ReplicaSet with YAML.
This is the file for the ReplicaSet:
apiVersion: apps/v1
kind: ReplicaSet
metadata:
name: myapp-replicaset
label:
app: myapp
spec:
selector:
matchlabels:
env: production
name: nginx
replicas: 3
template:
metadata:
name: nginx
labels:
env: production
name: nginx
spec:
containers:
- name: nginx
image: nginx
resources:
limits:
memory: "128Mi"
cpu: "500m"
ports:
- containerPort: 80
And this is the Pod file:
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    env: production
    name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
    resources:
      limits:
        memory: "128Mi"
        cpu: "500m"
    ports:
    - containerPort: 80
However, when I execute the kubectl create -f replicaset.yml command, I get the following error:
The ReplicaSet "myapp-replicaset" is invalid:
spec.selector: Invalid value: v1.LabelSelector{MatchLabels:map[string]string(nil), MatchExpressions:[]v1.LabelSelectorRequirement(nil)}: empty selector is invalid for deployment
spec.template.spec.containers: Required value
Your replicaset.yaml indentation seems to be wrong, and there are some typos.
replicas and template should be at the spec level. Also, check the marked and corrected typos in labels and matchLabels.
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: myapp-replicaset
  labels: # labels
    app: myapp
spec:
  selector:
    matchLabels: # matchLabels
      env: production
      name: nginx
  replicas: 3
  template:
    metadata:
      name: nginx
      labels:
        env: production
        name: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        resources:
          limits:
            memory: "128Mi"
            cpu: "500m"
        ports:
        - containerPort: 80
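A quick way to check the corrected file (filename as in the question) is to apply it and confirm the ReplicaSet and its pods come up with the expected labels:
kubectl apply -f replicaset.yml
kubectl get replicaset myapp-replicaset
kubectl get pods -l env=production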

Prometheus custom metric does not appear in custom.metrics kubernetes

I have applied all of the following configuration, but request_per_second does not appear when I run the command
kubectl get --raw /apis/custom.metrics.k8s.io/v1beta1
In the Node.js service that should be monitored I installed prom-client; I tested the /metrics endpoint and it works well, and the metric "request_count" is the object it returns.
Here are the important parts of that node code
(...)
const counter = new client.Counter({
  name: 'request_count',
  help: 'The total number of processed requests'
});
(...)
router.get('/metrics', async (req, res) => {
  res.set('Content-Type', client.register.contentType)
  res.end(await client.register.metrics())
})
This is my service monitor configuration
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: un1qnx-validation-service-monitor-node
  namespace: default
  labels:
    app: node-request-persistence
    release: prometheus
spec:
  selector:
    matchLabels:
      app: node-request-persistence
  endpoints:
  - interval: 5s
    path: /metrics
    port: "80"
    bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
  namespaceSelector:
    matchNames:
    - un1qnx-aks-development
This the node-request-persistence configuration
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: node-request-persistence
  namespace: un1qnx-aks-development
  name: node-request-persistence
spec:
  selector:
    matchLabels:
      app: node-request-persistence
  template:
    metadata:
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/path: /metrics
        prometheus.io/port: "80"
      labels:
        app: node-request-persistence
    spec:
      containers:
      - name: node-request-persistence
        image: node-request-persistence
        imagePullPolicy: Always # IfNotPresent
        resources:
          requests:
            memory: "200Mi" # Gi
            cpu: "100m"
          limits:
            memory: "400Mi"
            cpu: "500m"
        ports:
        - name: node-port
          containerPort: 80
This is the prometheus adapter
prometheus:
  url: http://prometheus-server.default.svc.cluster.local
  port: 9090
rules:
  custom:
  - seriesQuery: 'request_count{namespace!="", pod!=""}'
    resources:
      overrides:
        namespace: {resource: "namespace"}
        pod: {resource: "pod"}
    name:
      as: "request_per_second"
    metricsQuery: "round(avg(rate(<<.Series>>{<<.LabelMatchers>>}[1m])) by (<<.GroupBy>>))"
This is the hpa
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: un1qnx-validation-service-hpa-angle
  namespace: un1qnx-aks-development
spec:
  minReplicas: 1
  maxReplicas: 10
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: un1qnx-validation-service-angle
  metrics:
  - type: Pods
    pods:
      metric:
        name: request_per_second
      target:
        type: AverageValue
        averageValue: "5"
The command
kubectl get hpa -n un1qnx-aks-development
results in "unknown/5"
Also, the command
kubectl get --raw "http://prometheus-server.default.svc.cluster.local:9090/api/v1/series"
Results in
Error from server (NotFound): the server could not find the requested resource
I think it should return some value about the collected metrics... I suspect the problem is with the service monitor, but I am new to this.
As you may have noticed, I am trying to scale a deployment based on another deployment's pods; I don't know if there is a problem with that.
I appreciate an answer, because this is for my thesis
kubernetes - version 1.19.9
Prometheus - chart prometheus-14.2.1 app version 2.26.0
Prometheus Adapter - chart 2.14.2 app version 0.8.4
All of them were installed using Helm.
After some time I found the problems and changed the following:
I changed the port on the Prometheus adapter, the time window in the query, and the names of the resource overrides. To know the names for the resource overrides you need to port-forward to the Prometheus server and check the labels on the Targets page for the app you are monitoring. (A quick way to verify the result is shown after the snippets below.)
prometheus:
  url: http://prometheus-server.default.svc.cluster.local
  port: 80
rules:
  custom:
  - seriesQuery: 'request_count{kubernetes_namespace!="", kubernetes_pod_name!=""}'
    resources:
      overrides:
        kubernetes_namespace: {resource: "namespace"}
        kubernetes_pod_name: {resource: "pod"}
    name:
      matches: "request_count"
      as: "request_count"
    metricsQuery: "round(avg(rate(<<.Series>>{<<.LabelMatchers>>}[5m])) by (<<.GroupBy>>))"
I also added annotations to the deployment YAML:
spec:
  selector:
    matchLabels:
      app: node-request-persistence
  template:
    metadata:
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/path: /metrics
        prometheus.io/port: "80"
      labels:
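If it helps, a quick way to verify the adapter changes (metric and namespace names taken from the question; jq is optional and only used for readability):
# The metric should now be listed by the custom metrics API
kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1" | jq .
# And it should return values for the monitored pods
kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1/namespaces/un1qnx-aks-development/pods/*/request_count" | jq .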

ValidatingWebhookConfiguration does not stop pod without annotations being created

I'm trying to build an admission controller to enforce pod annotations on our cluster.
I was able to build a webhook service and deploy it. For testing purposes, I'm changing the server code to respond with Allowed: false as a default response to any request but it doesn't stop pods from getting created.
In the logs, I'm seeing the request hit the server, but it seems as though the kube-apiserver is not receiving, or not honoring, the response.
2021/01/25 02:29:15 &AdmissionResponse{UID:,Allowed:false,Result:&v1.Status{ListMeta:ListMeta{SelfLink:,ResourceVersion:,Continue:,RemainingItemCount:nil,},Status:,Message:,Reason:,Details:nil,Code:0,},Patch:nil,PatchType:nil,AuditAnnotations:map[string]string{},Warnings:[],}
Below are the service deployment file and the validating webhook configuration.
Appreciate any ideas/recommendations!
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webhook-server
  namespace: webhook-demo
  labels:
    app: webhook-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: webhook-server
  template:
    metadata:
      labels:
        app: webhook-server
    spec:
      securityContext:
        runAsNonRoot: true
        runAsUser: 1234
      containers:
      - name: webhook-server
        image: demo/admission-controller-webhook-demo:latest
        imagePullPolicy: Never
        ports:
        - containerPort: 8443
          name: webhook-api
        resources:
          requests:
            cpu: "100m"
            memory: "128M"
          limits:
            cpu: "250m"
            memory: "256M"
        volumeMounts:
        - name: webhook-tls-certs
          mountPath: /run/secrets/tls
          readOnly: true
      volumes:
      - name: webhook-tls-certs
        secret:
          secretName: webhook-server-tls
---
apiVersion: v1
kind: Service
metadata:
  name: webhook-server
  namespace: webhook-demo
spec:
  selector:
    app: webhook-server
  ports:
  - port: 443
    targetPort: webhook-api
---
apiVersion: admissionregistration.k8s.io/v1beta1
kind: ValidatingWebhookConfiguration
metadata:
  name: webhook-server
webhooks:
- name: webhook-server.webhook-demo.svc
  rules:
  - apiGroups: ["*"]
    apiVersions: ["*"]
    operations: ["CREATE","UPDATE"]
    resources: ["pods","deployments", "replicasets"]
  timeoutSeconds: 5
  clientConfig:
    service:
      name: webhook-server
      namespace: webhook-demo
      path: "/validate"
    caBundle: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0t<truncated>
  sideEffects: None
  admissionReviewVersions: ["v1beta1"]
I was sending the AdmissionResponse object in the response, but the API server actually needs an AdmissionReview object, which encapsulates the admission response and request objects. :facepalm
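For anyone hitting the same thing, a minimal sketch of the JSON body the webhook should return (for the admission.k8s.io/v1beta1 version used in the configuration above; the uid must echo the one from the incoming request, and the message text here is just an example):
{
  "apiVersion": "admission.k8s.io/v1beta1",
  "kind": "AdmissionReview",
  "response": {
    "uid": "<uid from the incoming AdmissionReview request>",
    "allowed": false,
    "status": {
      "message": "pod is missing required annotations"
    }
  }
}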

I am not able to see logs on the Kibana dashboard

I am using the ELK stack (elasticsearch, logstash, kibana) for log processing and analysis in a Kubernetes environment. To capture logs I am using filebeat.
The service account, cluster role, and cluster role binding for Elasticsearch are in the YAML below:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: elasticsearch
  namespace: kube-system
  labels:
    k8s-app: elasticsearch
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: elasticsearch
  labels:
    k8s-app: elasticsearch
rules:
- apiGroups:
  - ""
  resources:
  - "services"
  - "namespaces"
  - "endpoints"
  verbs:
  - "get"
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: kube-system
  name: elasticsearch
  labels:
    k8s-app: elasticsearch
subjects:
- kind: ServiceAccount
  name: elasticsearch
  namespace: kube-system
  apiGroup: ""
roleRef:
  kind: ClusterRole
  name: elasticsearch
  apiGroup: ""
elasticsearch service yaml
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch
  namespace: kube-system
  labels:
    k8s-app: elasticsearch
spec:
  ports:
  - port: 9200
    protocol: TCP
    targetPort: db
  selector:
    k8s-app: elasticsearch
  externalIPs:
  - 10.10.0.82
Elasticsearch StatefulSet YAML below:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: elasticsearch
  namespace: kube-system
  labels:
    k8s-app: elasticsearch
spec:
  serviceName: elasticsearch
  replicas: 2
  updateStrategy:
    type: RollingUpdate
  selector:
    matchLabels:
      k8s-app: elasticsearch
  template:
    metadata:
      labels:
        k8s-app: elasticsearch
    spec:
      serviceAccountName: elasticsearch
      containers:
      - image: elasticsearch:6.8.4
        name: elasticsearch
        resources:
          limits:
            cpu: 1000m
            memory: "2Gi"
          requests:
            cpu: 100m
            memory: "1Gi"
        ports:
        - containerPort: 9200
          name: db
          protocol: TCP
        - containerPort: 9300
          name: transport
          protocol: TCP
        volumeMounts:
        - name: data
          mountPath: /data
        env:
        - name: "NAMESPACE"
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
      initContainers:
      - image: alpine:3.6
        command: ["/sbin/sysctl", "-w", "vm.max_map_count=262144"]
        name: elasticsearch-init
        securityContext:
          privileged: true
  volumeClaimTemplates:
  - metadata:
      name: data
      labels:
        k8s-app: elasticsearch
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 10Gi
pv & pvc0 yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: elklogs-pv0
  namespace: kube-system
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteMany
  persistentVolumeReclaimPolicy: Recycle
  nfs:
    server: 10.10.0.131
    path: /opt/data/vol/0
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: data-elasticsearch-0
  namespace: kube-system
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
pv_pvc1.yaml
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: elklogs-pv1
  namespace: kube-system
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteMany
  persistentVolumeReclaimPolicy: Recycle
  nfs:
    server: 10.10.0.131
    path: /opt/data/vol/1
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: data-elasticsearch-1
  namespace: kube-system
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
logstash_svc.yaml
kind: Service
apiVersion: v1
metadata:
  name: logstash-service
  namespace: kube-system
spec:
  selector:
    app: logstash
  ports:
  - protocol: TCP
    port: 5044
    targetPort: 5044
  externalIPs:
  - 10.10.0.82
logstash_config.yaml
kind: ConfigMap
metadata:
  name: logstash-configmap
  namespace: kube-system
data:
  logstash.yml: |
    http.host: "0.0.0.0"
    path.config: /usr/share/logstash/pipeline
  logstash.conf: |
    input {
      beats {
        port => 5044
      }
    }
    filter {
      grok {
        match => { "message" => "%{COMBINEDAPACHELOG}" }
      }
      date {
        match => [ "timestamp" , "dd/MMM/yyyy:HH:mm:ss Z" ]
      }
      geoip {
        source => "clientip"
      }
    }
    output {
      elasticsearch {
        hosts => ["http://10.10.0.82:9200"]
      }
    }
logstash deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: logstash-deployment
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: logstash
  template:
    metadata:
      labels:
        app: logstash
    spec:
      containers:
      - name: logstash
        image: docker.elastic.co/logstash/logstash:6.3.0
        ports:
        - containerPort: 5044
        volumeMounts:
        - name: config-volume
          mountPath: /usr/share/logstash/config
        - name: logstash-pipeline-volume
          mountPath: /usr/share/logstash/pipeline
      volumes:
      - name: config-volume
        configMap:
          name: logstash-configmap
          items:
          - key: logstash.yml
            path: logstash.yml
      - name: logstash-pipeline-volume
        configMap:
          name: logstash-configmap
          items:
          - key: logstash.conf
            path: logstash.conf
filebeat.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: filebeat
  namespace: kube-system
  labels:
    k8s-app: filebeat
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: filebeat
  labels:
    k8s-app: filebeat
rules:
- apiGroups: [""] # "" indicates the core API group
  resources:
  - namespaces
  - pods
  verbs:
  - get
  - watch
  - list
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: filebeat
subjects:
- kind: ServiceAccount
  name: filebeat
  namespace: kube-system
roleRef:
  kind: ClusterRole
  name: filebeat
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-config
  namespace: kube-system
  labels:
    k8s-app: filebeat
data:
  filebeat.yml: |-
    filebeat.config:
      prospectors:
        # Mounted `filebeat-prospectors` configmap:
        path: ${path.config}/prospectors.d/*.yml
        # Reload prospectors configs as they change:
        reload.enabled: false
      modules:
        path: ${path.config}/modules.d/*.yml
        # Reload module configs as they change:
        reload.enabled: false
    output.logstash:
      hosts: ["http://10.10.0.82:5044"]
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-prospectors
  namespace: kube-system
  labels:
    k8s-app: filebeat
data:
  kubernetes.yml: |-
    - type: docker
      containers.ids:
      - "*"
      processors:
      - add_kubernetes_metadata:
          in_cluster: true
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: filebeat
  namespace: kube-system
  labels:
    k8s-app: filebeat
spec:
  selector:
    matchLabels:
      k8s-app: filebeat
  template:
    metadata:
      labels:
        k8s-app: filebeat
    spec:
      serviceAccountName: filebeat
      terminationGracePeriodSeconds: 30
      containers:
      - name: filebeat
        image: docker.elastic.co/beats/filebeat:6.8.4
        args: [
          "-c", "/etc/filebeat.yml",
          "-e",
        ]
        securityContext:
          runAsUser: 0
        volumeMounts:
        - name: config
          mountPath: /etc/filebeat.yml
          readOnly: true
          subPath: filebeat.yml
        - name: prospectors
          mountPath: /usr/share/filebeat/prospectors.d
          readOnly: true
        - name: data
          mountPath: /usr/share/filebeat/data
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
      volumes:
      - name: config
        configMap:
          defaultMode: 0600
          name: filebeat-config
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
      - name: prospectors
        configMap:
          defaultMode: 0600
          name: filebeat-prospectors
      - name: data
        emptyDir: {}
kibana.yaml
kind: Deployment
metadata:
  name: kibana-logging
  namespace: kube-system
  labels:
    k8s-app: kibana-logging
spec:
  replicas: 3
  selector:
    matchLabels:
      k8s-app: kibana-logging
  template:
    metadata:
      labels:
        k8s-app: kibana-logging
    spec:
      containers:
      - name: kibana-logging
        image: docker.elastic.co/kibana/kibana-oss:6.8.4
        env:
        - name: ELASTICSEARCH_URL
          value: http://10.10.0.82:9200
        ports:
        - containerPort: 5601
          name: ui
          protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: kibana-logging
  namespace: kube-system
  labels:
    k8s-app: kibana-logging
    kubernetes.io/name: "Kibana"
spec:
  type: NodePort
  ports:
  - port: 5601
    protocol: TCP
    targetPort: ui
    nodePort: 32010
  selector:
    k8s-app: kibana-logging
kubectl get svc -n kube-system
elasticsearch ClusterIP 10.43.50.63 10.10.0.82 9200/TCP 31m
kibana-logging NodePort 10.43.58.127 10.10.0.82 5601:32010/TCP 4m4s
kube-dns ClusterIP 10.43.0.10 <none> 53/UDP,53/TCP,9153/TCP 23d
logstash-service ClusterIP 10.43.130.36 10.10.0.82 5044/TCP 30m
filebeat pod logs :
2020-11-04T16:42:22.857Z INFO log/harvester.go:255 Harvester started for file: /var/lib/docker/containers/011d24d334bba573ffbb466b0f3f70ae5ddc986f233e683076eaae7394801203/011d24d334bba573ffbb466b0f3f70ae5ddc986f233e683076eaae7394801203-json.log
2020-11-04T16:42:22.983Z INFO pipeline/output.go:95 Connecting to backoff(async(tcp://logstash-service:9600))
2020-11-04T16:42:52.412Z INFO [monitoring] log/log.go:144 Non-zero metrics in the last 30s {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":270,"time":{"ms":271}},"total":{"ticks":740,"time":{"ms":745},"value":740},"user":{"ticks":470,"time":{"ms":474}}},"handles":{"limit":{"hard":1048576,"soft":1048576},"open":97},"info":{"ephemeral_id":"6584086a-eff4-46b5-9be0-93892dad9d97","uptime":{"ms":30191}},"memstats":{"gc_next":36421840,"memory_alloc":32140904,"memory_total":55133048,"rss":65593344}},"filebeat":{"events":{"active":4214,"added":4219,"done":5},"harvester":{"open_files":89,"running":88,"started":88}},"libbeat":{"config":{"module":{"running":0},"reloads":2},"output":{"type":"logstash"},"pipeline":{"clients":2,"events":{"active":4117,"filtered":88,"published":4116,"total":4205}}},"registrar":{"states":{"current":5,"update":5},"writes":{"success":6,"total":6}},"system":{"cpu":{"cores":8},"load":{"1":1.9,"15":0.61,"5":0.9,"norm":{"1":0.2375,"15":0.0763,"5":0.1125}}}}}}
2020-11-04T16:42:54.289Z ERROR pipeline/output.go:100 Failed to connect to backoff(async(tcp://logstash-service:5044)): dial tcp 10.43.145.162:5044: i/o timeout
2020-11-04T16:42:54.289Z INFO pipeline/output.go:93 Attempting to reconnect to backoff(async(tcp://logstash-service:5044)) with 1 reconnect attempt(s)
logstash pod logs :
[WARN ] 2020-11-04 15:45:04.648 [Ruby-0-Thread-4: /usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-output-elasticsearch-9.1.1-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:232] elasticsearch - Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://elasticsearch:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://elasticsearch:9200/][Manticore::ResolutionFailure] elasticsearch"}
From what I understand of your architecture, you are using Filebeat >> Logstash >> Elasticsearch >> Kibana.
So, in filebeat.yml, you have selected Logstash as the output, but you have given the wrong port for the Logstash output in filebeat.yml.
It should be:
output.logstash:
  hosts: ['http://195.134.187.25:5044']
As you can see in logstash_config.yaml, you have configured 5044 as the Beats input, so make that change in output.logstash in filebeat.yml.
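Since Filebeat runs inside the cluster, an alternative sketch is to point it at the Service from logstash_svc.yaml instead of an external IP (Filebeat's Logstash output expects host:port rather than a URL):
output.logstash:
  hosts: ["logstash-service.kube-system.svc.cluster.local:5044"]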

kubernetes StorageClass does not retain existing data

My Kubernetes StorageClass volume doesn't retain existing data when the pod is deleted and deployed back with my postgresql database. When I delete the pod, the new pod is created but the database is empty.
I have followed variations of the different versions of the tutorials (https://kubernetes.io/docs/concepts/storage/persistent-volumes/) but nothing seems to work.
I am pasting all the YAML files because the problem might be in the combination.
storage-google.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: spingular-pvc
spec:
  storageClassName: standard
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 7Gi
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard
  zone: us-east4-a
jhipsterpress-postgresql.yml
apiVersion: v1
kind: Secret
metadata:
  name: jhipsterpress-postgresql
  namespace: default
  labels:
    app: jhipsterpress-postgresql
type: Opaque
data:
  postgres-password: NjY0NXJxd24=
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: jhipsterpress-postgresql
  namespace: default
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: jhipsterpress-postgresql
    spec:
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: spingular-pvc
      containers:
      - name: postgres
        image: postgres:10.4
        env:
        - name: POSTGRES_USER
          value: jhipsterpress
        - name: POSTGRES_PASSWORD
          valueFrom:
            secretKeyRef:
              name: jhipsterpress-postgresql
              key: postgres-password
        ports:
        - containerPort: 5432
        volumeMounts:
        - name: data
          mountPath: /var/lib/postgresql/
---
apiVersion: v1
kind: Service
metadata:
  name: jhipsterpress-postgresql
  namespace: default
spec:
  selector:
    app: jhipsterpress-postgresql
  ports:
  - name: postgresqlport
    port: 5432
  type: LoadBalancer
jhipsterpress-deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jhipsterpress
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jhipsterpress
      version: "v1"
  template:
    metadata:
      labels:
        app: jhipsterpress
        version: "v1"
    spec:
      initContainers:
      - name: init-ds
        image: busybox:latest
        command:
        - '/bin/sh'
        - '-c'
        - |
          while true
          do
            rt=$(nc -z -w 1 jhipsterpress-postgresql 5432)
            if [ $? -eq 0 ]; then
              echo "DB is UP"
              break
            fi
            echo "DB is not yet reachable;sleep for 10s before retry"
            sleep 10
          done
      containers:
      - name: jhipsterpress-app
        image: galore/jhipsterpress
        env:
        - name: SPRING_PROFILES_ACTIVE
          value: prod
        - name: SPRING_DATASOURCE_URL
          value: jdbc:postgresql://jhipsterpress-postgresql.default.svc.cluster.local:5432/jhipsterpress
        - name: SPRING_DATASOURCE_USERNAME
          value: jhipsterpress
        - name: SPRING_DATASOURCE_PASSWORD
          valueFrom:
            secretKeyRef:
              name: jhipsterpress-postgresql
              key: postgres-password
        - name: JAVA_OPTS
          value: " -Xmx256m -Xms256m"
        resources:
          requests:
            memory: "256Mi"
            cpu: "500m"
          limits:
            memory: "512Mi"
            cpu: "1"
        ports:
        - name: http
          containerPort: 8080
        readinessProbe:
          httpGet:
            path: /management/health
            port: http
          initialDelaySeconds: 20
          periodSeconds: 15
          failureThreshold: 6
        livenessProbe:
          httpGet:
            path: /management/health
            port: http
          initialDelaySeconds: 120
jhipsterpress-service.yml
apiVersion: v1
kind: Service
metadata:
  name: jhipsterpress
  namespace: default
  labels:
    app: jhipsterpress
spec:
  selector:
    app: jhipsterpress
  type: LoadBalancer
  ports:
  - name: http
    port: 8080
When I included a Retain Policy I was getting this error:
#cloudshell:~ (academic-veld-230622)$ kubectl apply -f storage-google.yaml
error: error validating "storage-google.yaml": error validating data:
ValidationError(PersistentVolumeClaim.spec): unknown field "persistentVolumeReclaimPolicy" in io.k8s.api.core.v1.PersistentVolumeClaimSpec; if you choose to ignore these errors, turn validation off with --validate=false
Please, if you know of a complete working example using a public image (with PostgreSQL; I can make it work with Mongo), I would really appreciate it.
Thanks to all.
Note that for this to work you need your PVC to dynamically provision a PV to satisfy its requirements; then there will be a permanent binding between the PVC and the PV, and every time your workload uses the PVC it will use the same PV. Specifically, as indicated by this excerpt:
If a PV was dynamically provisioned for a new PVC, the loop will always bind that PV to the PVC
If in your case the Google Persistent Disk is being provisioned by the PVC, and you can verify that on GCP it's the same PV used every time, then it's probably an issue with the pod startup process where it's removing all the data. (Is there any reason why you are using /var/lib/postgresql/ vs /var/lib/postgresql?)
Also, persistentVolumeReclaimPolicy: Retain applies to a PV, not a PVC. For dynamically provisioned PVs the value is Delete. In your case it wouldn't apply anyway, because your dynamically provisioned volume should stay bound to your PVC; in other words, you are not reclaiming the volume. For example:
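# On an already-provisioned PV, the reclaim policy can be changed with a patch
# like this (the PV name is a placeholder):
kubectl patch pv <pv-name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'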
Having said all that, the recommended way to deploy a database is with a StatefulSet, similar to this MySQL example, using a volumeClaimTemplate; a minimal sketch follows.
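A minimal sketch of what that could look like for the manifests above (names, image, and storage class taken from the question; the PGDATA subdirectory is an assumption to keep the Postgres data directory inside a subfolder of the mount):
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: jhipsterpress-postgresql
  namespace: default
spec:
  serviceName: jhipsterpress-postgresql
  replicas: 1
  selector:
    matchLabels:
      app: jhipsterpress-postgresql
  template:
    metadata:
      labels:
        app: jhipsterpress-postgresql
    spec:
      containers:
      - name: postgres
        image: postgres:10.4
        env:
        - name: POSTGRES_USER
          value: jhipsterpress
        - name: POSTGRES_PASSWORD
          valueFrom:
            secretKeyRef:
              name: jhipsterpress-postgresql
              key: postgres-password
        # Assumption: point PGDATA at a subdirectory of the mounted volume
        - name: PGDATA
          value: /var/lib/postgresql/data/pgdata
        ports:
        - containerPort: 5432
        volumeMounts:
        - name: data
          mountPath: /var/lib/postgresql/data
  # Each replica gets its own PVC generated from this template
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      storageClassName: standard
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 7Gi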