I'm attempting to use the Statistics Gathering Jenkins plugin to forward metrics to Logstash. The plugin is configured with the following URL: http://logstash.monitoring-observability:9000. Both Jenkins and Logstash are deployed on Kubernetes. When I run a build, which triggers metrics forwarding via this plugin, I see the following error in the logs:
2022-02-19 23:29:20.464+0000 [id=263] WARNING o.j.p.s.g.util.RestClientUtil$1#failed: The request for url http://logstash.monitoring-observability:9000/ has failed.
java.net.ConnectException: Connection refused
at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777)
at org.apache.http.impl.nio.reactor.DefaultConnectingIOReactor.processEvent(DefaultConnectingIOReactor.java:173
I get the same behavior when I exec into the jenkins pod and attempt to curl logstash:
jenkins#jenkins-7889fb54b8-d9rvr:/$ curl -vvv logstash.monitoring-observability:9000
* Trying 10.52.9.143:9000...
* connect to 10.52.9.143 port 9000 failed: Connection refused
* Failed to connect to logstash.monitoring-observability port 9000: Connection refused
* Closing connection 0
curl: (7) Failed to connect to logstash.monitoring-observability port 9000: Connection refused
I also get the following error in the logstash logs:
[ERROR] 2022-02-20 00:05:43.450 [[main]<tcp] pipeline - A plugin had an unrecoverable error. Will restart this plugin.
Pipeline_id:main
Plugin: <LogStash::Inputs::Tcp port=>9000, codec=><LogStash::Codecs::JSON id=>"json_f96babad-299c-42ab-98e0-b78c025d9476", enable_metric=>true, charset=>"UTF-8">, host=>"jenkins-server.devops-tools", ssl_verify=>false, id=>"0fddd9afb2fcf12beb75af799a2d771b99af6ac4807f5a67f4ec5e13f008803f", enable_metric=>true, mode=>"server", proxy_protocol=>false, ssl_enable=>false, ssl_key_passphrase=><password>>
Error: Cannot assign requested address
Exception: Java::JavaNet::BindException
Stack: sun.nio.ch.Net.bind0(Native Method)
Here is my jenkins-deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
name: jenkins
namespace: devops-tools
labels:
app: jenkins-server
spec:
replicas: 1
selector:
matchLabels:
app: jenkins-server
template:
metadata:
labels:
app: jenkins-server
spec:
securityContext:
fsGroup: 1000
runAsUser: 1000
serviceAccountName: jenkins-admin
containers:
- name: jenkins
env:
- name: LOGSTASH_HOST
value: logstash
- name: LOGSTASH_PORT
value: "5044"
- name: ELASTICSEARCH_HOST
value: elasticsearch-logging
- name: ELASTICSEARCH_PORT
value: "9200"
- name: ELASTICSEARCH_USERNAME
value: elastic
- name: ELASTICSEARCH_PASSWORD
value: changeme
image: jenkins/jenkins:lts
resources:
limits:
memory: "2Gi"
cpu: "1000m"
requests:
memory: "500Mi"
cpu: "500m"
ports:
- name: httpport
containerPort: 8080
- name: jnlpport
containerPort: 50000
livenessProbe:
httpGet:
path: "/login"
port: 8080
initialDelaySeconds: 90
periodSeconds: 10
timeoutSeconds: 5
failureThreshold: 5
readinessProbe:
httpGet:
path: "/login"
port: 8080
initialDelaySeconds: 60
periodSeconds: 10
timeoutSeconds: 5
failureThreshold: 3
volumeMounts:
- name: jenkins-data
mountPath: /var/jenkins_home
volumes:
- name: jenkins-data
persistentVolumeClaim:
claimName: jenkins-pv-claim
Here is my jenkins-service.yaml
apiVersion: v1
kind: Service
metadata:
name: jenkins-server
namespace: devops-tools
annotations:
prometheus.io/scrape: 'true'
prometheus.io/path: /
prometheus.io/port: '8080'
spec:
selector:
app: jenkins-server
k8s-app: jenkins-server
type: NodePort
ports:
- port: 8080
targetPort: 8080
nodePort: 30000
Here is my logstash-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: logstash-deployment
namespace: monitoring-observability
labels:
app: logstash
spec:
selector:
matchLabels:
app: logstash
replicas: 1
template:
metadata:
labels:
app: logstash
spec:
containers:
- name: logstash
env:
- name: JENKINS_HOST
value: jenkins-server
- name: JENKINS_PORT
value: "8080"
image: docker.elastic.co/logstash/logstash:6.3.0
ports:
- containerPort: 9000
volumeMounts:
- name: config-volume
mountPath: /usr/share/logstash/config
- name: logstash-pipeline-volume
mountPath: /usr/share/logstash/pipeline
volumes:
- name: config-volume
configMap:
name: logstash-configmap
items:
- key: logstash.yml
path: logstash.yml
- name: logstash-pipeline-volume
configMap:
name: logstash-configmap
items:
- key: logstash.conf
path: logstash.conf
Here is my logstash-service.yaml
kind: Service
apiVersion: v1
metadata:
name: logstash
namespace: monitoring-observability
labels:
app: logstash
kubernetes.io/cluster-service: "true"
addonmanager.kubernetes.io/mode: Reconcile
kubernetes.io/name: "logstash"
spec:
selector:
app: logstash
ports:
- protocol: TCP
port: 9000
targetPort: 9000
type: ClusterIP
Here is my logstash configmap:
apiVersion: v1
kind: ConfigMap
metadata:
name: logstash-configmap
namespace: monitoring-observability
data:
logstash.yml: |
http.host: "0.0.0.0"
path.config: /usr/share/logstash/pipeline
logstash.conf: |
input {
tcp {
port => "9000"
codec => "json"
host => "jenkins-server.devops-tools"
ssl_verify => "false"
}
}
filter {
if [message] =~ /^\{.*\}$/ {
json {
source => "message"
}
}
if [ClientHost] {
geoip {
source => "ClientHost"
}
}
}
output {
elasticsearch {
hosts => [ "elasticsearch-logging:9200" ]
}
}
There are no firewalls configured in my cluster that would block traffic on port 9000. I have also tried the same configuration with port 5044 and got the same results. It seems as though my Logstash instance is not actually listening on the containerPort. Why might this be?
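Before concluding that nothing is listening, two quick checks can narrow it down (a sketch; the resource names are taken from the manifests above, and ss/netstat may not be present in the Logstash image):
# Does the logstash Service have endpoints behind it?
kubectl -n monitoring-observability get endpoints logstash
# Is anything inside the Logstash pod bound to port 9000?
kubectl -n monitoring-observability exec deploy/logstash-deployment -- sh -c 'ss -tln || netstat -tln'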
I resolved this error by updating the configmap to this:
apiVersion: v1
kind: ConfigMap
metadata:
name: logstash-configmap
namespace: monitoring-observability
data:
logstash.yml: |
path.config: /usr/share/logstash/pipeline
logstash.conf: |
input {
tcp {
port => "9000"
codec => "json"
ssl_verify => "false"
}
}
filter {
if [message] =~ /^\{.*\}$/ {
json {
source => "message"
}
}
if [ClientHost] {
geoip {
source => "ClientHost"
}
}
}
output {
elasticsearch {
hosts => [ "elasticsearch-logging:9200" ]
}
}
Note that all references to the Jenkins host have been removed. The host option on Logstash's tcp input is the local address the plugin binds to when acting as a server, not the remote client it should accept connections from. Pointing it at jenkins-server.devops-tools made Logstash try to bind an address that does not exist inside its own pod, which is exactly the "Cannot assign requested address" bind error above and why nothing was listening on port 9000. With the option removed, the input binds to all interfaces and the connection refusals disappear.
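If you do want to keep a host option on the tcp input, it has to be an address that is local to the Logstash pod. A minimal sketch (not part of the original config):
input {
  tcp {
    port => 9000
    codec => "json"
    host => "0.0.0.0"   # bind address local to the Logstash pod, never a remote hostname
  }
}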
prometheus-prometheus-kube-prometheus-prometheus-0 0/2 Terminating 0 4s
alertmanager-prometheus-kube-prometheus-alertmanager-0 0/2 Terminating 0 10s
After updating the EKS cluster from 1.15 to 1.16, everything works fine except these two pods: they keep terminating and never initialise, so Prometheus monitoring does not work. I am getting the errors below when describing the pods.
Error: failed to start container "prometheus": Error response from daemon: OCI runtime create failed: container_linux.go:362: creating new parent process caused: container_linux.go:1941: running lstat on namespace path "/proc/29271/ns/ipc" caused: lstat /proc/29271/ns/ipc: no such file or directory: unknown
Error: failed to start container "config-reloader": Error response from daemon: cannot join network of a non running container: 7e139521980afd13dad0162d6859352b0b2c855773d6d4062ee3e2f7f822a0b3
Error: cannot find volume "config" to mount into container "config-reloader"
Error: cannot find volume "config" to mount into container "prometheus"
Here is the YAML of the pod:
apiVersion: v1
kind: Pod
metadata:
annotations:
kubernetes.io/psp: eks.privileged
creationTimestamp: "2021-04-30T16:39:14Z"
deletionGracePeriodSeconds: 600
deletionTimestamp: "2021-04-30T16:49:14Z"
generateName: prometheus-prometheus-kube-prometheus-prometheus-
labels:
app: prometheus
app.kubernetes.io/instance: prometheus-kube-prometheus-prometheus
app.kubernetes.io/managed-by: prometheus-operator
app.kubernetes.io/name: prometheus
app.kubernetes.io/version: 2.26.0
controller-revision-hash: prometheus-prometheus-kube-prometheus-prometheus-56d9fcf57
operator.prometheus.io/name: prometheus-kube-prometheus-prometheus
operator.prometheus.io/shard: "0"
prometheus: prometheus-kube-prometheus-prometheus
statefulset.kubernetes.io/pod-name: prometheus-prometheus-kube-prometheus-prometheus-0
name: prometheus-prometheus-kube-prometheus-prometheus-0
namespace: mo
ownerReferences:
- apiVersion: apps/v1
blockOwnerDeletion: true
controller: true
kind: StatefulSet
name: prometheus-prometheus-kube-prometheus-prometheus
uid: 326a09f2-319c-449d-904a-1dd0019c6d80
resourceVersion: "9337443"
selfLink: /api/v1/namespaces/monitoring/pods/prometheus-prometheus-kube-prometheus-prometheus-0
uid: e2be062f-749d-488e-a6cc-42ef1396851b
spec:
containers:
- args:
- --web.console.templates=/etc/prometheus/consoles
- --web.console.libraries=/etc/prometheus/console_libraries
- --config.file=/etc/prometheus/config_out/prometheus.env.yaml
- --storage.tsdb.path=/prometheus
- --storage.tsdb.retention.time=10d
- --web.enable-lifecycle
- --storage.tsdb.no-lockfile
- --web.external-url=http://prometheus-kube-prometheus-prometheus.monitoring:9090
- --web.route-prefix=/
image: quay.io/prometheus/prometheus:v2.26.0
imagePullPolicy: IfNotPresent
name: prometheus
ports:
- containerPort: 9090
name: web
protocol: TCP
readinessProbe:
failureThreshold: 120
httpGet:
path: /-/ready
port: web
scheme: HTTP
periodSeconds: 5
successThreshold: 1
timeoutSeconds: 3
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: FallbackToLogsOnError
volumeMounts:
- mountPath: /etc/prometheus/config_out
name: config-out
readOnly: true
- mountPath: /etc/prometheus/certs
name: tls-assets
readOnly: true
- mountPath: /prometheus
name: prometheus-prometheus-kube-prometheus-prometheus-db
- mountPath: /etc/prometheus/rules/prometheus-prometheus-kube-prometheus-prometheus-rulefiles-0
name: prometheus-prometheus-kube-prometheus-prometheus-rulefiles-0
- mountPath: /var/run/secrets/kubernetes.io/serviceaccount
name: prometheus-kube-prometheus-prometheus-token-mh66q
readOnly: true
- args:
- --listen-address=:8080
- --reload-url=http://localhost:9090/-/reload
- --config-file=/etc/prometheus/config/prometheus.yaml.gz
- --config-envsubst-file=/etc/prometheus/config_out/prometheus.env.yaml
- --watched-dir=/etc/prometheus/rules/prometheus-prometheus-kube-prometheus-prometheus-rulefiles-0
command:
- /bin/prometheus-config-reloader
env:
- name: POD_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.name
- name: SHARD
value: "0"
image: quay.io/prometheus-operator/prometheus-config-reloader:v0.47.0
imagePullPolicy: IfNotPresent
name: config-reloader
ports:
- containerPort: 8080
name: reloader-web
protocol: TCP
resources:
limits:
cpu: 100m
memory: 50Mi
requests:
cpu: 100m
memory: 50Mi
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: FallbackToLogsOnError
volumeMounts:
- mountPath: /etc/prometheus/config
name: config
- mountPath: /etc/prometheus/config_out
name: config-out
- mountPath: /etc/prometheus/rules/prometheus-prometheus-kube-prometheus-prometheus-rulefiles-0
name: prometheus-prometheus-kube-prometheus-prometheus-rulefiles-0
- mountPath: /var/run/secrets/kubernetes.io/serviceaccount
name: prometheus-kube-prometheus-prometheus-token-mh66q
readOnly: true
dnsPolicy: ClusterFirst
enableServiceLinks: true
hostname: prometheus-prometheus-kube-prometheus-prometheus-0
nodeName: ip-10-1-49-45.ec2.internal
priority: 0
restartPolicy: Always
schedulerName: default-scheduler
securityContext:
fsGroup: 2000
runAsGroup: 2000
runAsNonRoot: true
runAsUser: 1000
serviceAccount: prometheus-kube-prometheus-prometheus
serviceAccountName: prometheus-kube-prometheus-prometheus
subdomain: prometheus-operated
terminationGracePeriodSeconds: 600
tolerations:
- effect: NoExecute
key: node.kubernetes.io/not-ready
operator: Exists
tolerationSeconds: 300
- effect: NoExecute
key: node.kubernetes.io/unreachable
operator: Exists
tolerationSeconds: 300
volumes:
- name: config
secret:
defaultMode: 420
secretName: prometheus-prometheus-kube-prometheus-prometheus
- name: tls-assets
secret:
defaultMode: 420
secretName: prometheus-prometheus-kube-prometheus-prometheus-tls-assets
- emptyDir: {}
name: config-out
- configMap:
defaultMode: 420
name: prometheus-prometheus-kube-prometheus-prometheus-rulefiles-0
name: prometheus-prometheus-kube-prometheus-prometheus-rulefiles-0
- emptyDir: {}
name: prometheus-prometheus-kube-prometheus-prometheus-db
- name: prometheus-kube-prometheus-prometheus-token-mh66q
secret:
defaultMode: 420
secretName: prometheus-kube-prometheus-prometheus-token-mh66q
status:
conditions:
- lastProbeTime: null
lastTransitionTime: "2021-04-30T16:39:14Z"
status: "True"
type: PodScheduled
phase: Pending
qosClass: Burstable
In case someone needs the answer: in my situation (described above) there were two Prometheus operators running in different namespaces, one in the default namespace and another in the monitoring namespace. I removed the one in the default namespace and that resolved the pod crashing issue.
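A quick way to check whether more than one operator is reconciling the same Prometheus resources (a sketch; the exact deployment names and labels depend on how each chart was installed):
# List every operator deployment across namespaces
kubectl get deployments --all-namespaces | grep -i prometheus-operator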
I am creating an InferenceService instance using the following YAML file:
apiVersion: "serving.kubeflow.org/v1alpha2"
kind: "InferenceService"
metadata:
minReplicas: 1
name: "sklearn-iris"
spec:
default:
predictor:
sklearn:
storageUri: "gs://kfserving-samples/models/sklearn/iris"
Now this creates a deployment, and since I am working from behind a proxy I am injecting the proxy environment variables as follows:
kubectl set env deployment/sklearn-iris-predictor-default-dclkq-deployment -n kfserving-test http_proxy={http_proxy value}
kubectl set env deployment/sklearn-iris-predictor-default-dclkq-deployment -n kfserving-test https_proxy={https_proxy value}
kubectl set env deployment/sklearn-iris-predictor-default-dclkq-deployment -n kfserving-test no_proxy={no_proxy value}
Since I have set the minimum replicas to 1, one pod is kept around even without traffic. When this pod is created it runs one init container and two regular containers. With the kubectl set env approach above, the proxy variables are set on the containers but not on the init container, so the init container fails and the whole deployment fails.
In short: is there any way to set proxy/env details on the init container without having the whole deployment YAML available to configure the env?
Edit:
kubectl edit deploy/deployment_name -o yaml -n namespace gives
apiVersion: apps/v1
kind: Deployment
metadata:
annotations:
autoscaling.knative.dev/class: kpa.autoscaling.knative.dev
autoscaling.knative.dev/minScale: "1"
deployment.kubernetes.io/revision: "1"
internal.serving.kubeflow.org/storage-initializer-sourceuri: gs://kfserving-samples/models/sklearn/iris
serving.knative.dev/creator: system:serviceaccount:kfserving-system:default
creationTimestamp: "2021-02-03T06:51:29Z"
generation: 3
labels:
app: sklearn-iris-predictor-default-6xcgj
component: predictor
service.istio.io/canonical-name: sklearn-iris-predictor-default
service.istio.io/canonical-revision: sklearn-iris-predictor-default-6xcgj
serving.knative.dev/configuration: sklearn-iris-predictor-default
serving.knative.dev/configurationGeneration: "1"
serving.knative.dev/revision: sklearn-iris-predictor-default-6xcgj
serving.knative.dev/revisionUID: 470195f7-db41-4e9c-ac6b-c96c79a1218f
serving.knative.dev/service: sklearn-iris-predictor-default
serving.kubeflow.org/inferenceservice: sklearn-iris
name: sklearn-iris-predictor-default-6xcgj-deployment
namespace: kfserving-test
ownerReferences:
- apiVersion: serving.knative.dev/v1
blockOwnerDeletion: true
controller: true
kind: Revision
name: sklearn-iris-predictor-default-6xcgj
uid: 470195f7-db41-4e9c-ac6b-c96c79a1218f
resourceVersion: "633491"
selfLink: /apis/apps/v1/namespaces/kfserving-test/deployments/sklearn-iris-predictor-default-6xcgj-deployment
uid: 2fc10485-ba59-4eaf-b62a-480ecf4ab078
spec:
progressDeadlineSeconds: 120
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
serving.knative.dev/revisionUID: 470195f7-db41-4e9c-ac6b-c96c79a1218f
strategy:
rollingUpdate:
maxSurge: 25%
maxUnavailable: 25%
type: RollingUpdate
template:
metadata:
annotations:
autoscaling.knative.dev/class: kpa.autoscaling.knative.dev
autoscaling.knative.dev/minScale: "1"
internal.serving.kubeflow.org/storage-initializer-sourceuri: gs://kfserving-samples/models/sklearn/iris
serving.knative.dev/creator: system:serviceaccount:kfserving-system:default
creationTimestamp: null
labels:
app: sklearn-iris-predictor-default-6xcgj
component: predictor
service.istio.io/canonical-name: sklearn-iris-predictor-default
service.istio.io/canonical-revision: sklearn-iris-predictor-default-6xcgj
serving.knative.dev/configuration: sklearn-iris-predictor-default
serving.knative.dev/configurationGeneration: "1"
serving.knative.dev/revision: sklearn-iris-predictor-default-6xcgj
serving.knative.dev/revisionUID: 470195f7-db41-4e9c-ac6b-c96c79a1218f
serving.knative.dev/service: sklearn-iris-predictor-default
serving.kubeflow.org/inferenceservice: sklearn-iris
spec:
containers:
- args:
- --model_name=sklearn-iris
- --model_dir=/mnt/models
- --http_port=8080
- --workers=0
env:
- name: http_proxy
value: {proxy data}
- name: https_proxy
value: {proxy data}
- name: no_proxy
value: {no proxy data}
- name: PORT
value: "8080"
- name: K_REVISION
value: sklearn-iris-predictor-default-6xcgj
- name: K_CONFIGURATION
value: sklearn-iris-predictor-default
- name: K_SERVICE
value: sklearn-iris-predictor-default
- name: K_INTERNAL_POD_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.name
- name: K_INTERNAL_POD_NAMESPACE
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.namespace
image: gcr.io/kfserving/sklearnserver#sha256:fd87e984a6092aae6efd28a2d596aac16d83d207a0269a503a221cb24cfd2f39
imagePullPolicy: IfNotPresent
lifecycle:
preStop:
httpGet:
path: /wait-for-drain
port: 8022
scheme: HTTP
name: kfserving-container
ports:
- containerPort: 8080
name: user-port
protocol: TCP
resources:
limits:
cpu: "1"
memory: 2Gi
requests:
cpu: "1"
memory: 2Gi
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: FallbackToLogsOnError
volumeMounts:
- mountPath: /var/log
name: knative-var-log
subPathExpr: $(K_INTERNAL_POD_NAMESPACE)_$(K_INTERNAL_POD_NAME)_kfserving-container
- env:
- name: SERVING_NAMESPACE
value: kfserving-test
- name: SERVING_SERVICE
value: sklearn-iris-predictor-default
- name: SERVING_CONFIGURATION
value: sklearn-iris-predictor-default
- name: SERVING_REVISION
value: sklearn-iris-predictor-default-6xcgj
- name: QUEUE_SERVING_PORT
value: "8012"
- name: CONTAINER_CONCURRENCY
value: "0"
- name: REVISION_TIMEOUT_SECONDS
value: "300"
- name: SERVING_POD
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.name
- name: SERVING_POD_IP
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: status.podIP
- name: SERVING_LOGGING_CONFIG
value: |-
{
"level": "info",
"development": false,
"outputPaths": ["stdout"],
"errorOutputPaths": ["stderr"],
"encoding": "json",
"encoderConfig": {
"timeKey": "ts",
"levelKey": "level",
"nameKey": "logger",
"callerKey": "caller",
"messageKey": "msg",
"stacktraceKey": "stacktrace",
"lineEnding": "",
"levelEncoder": "",
"timeEncoder": "iso8601",
"durationEncoder": "",
"callerEncoder": ""
}
}
- name: SERVING_LOGGING_LEVEL
- name: SERVING_REQUEST_LOG_TEMPLATE
value: '{"httpRequest": {"requestMethod": "{{.Request.Method}}", "requestUrl":
"{{js .Request.RequestURI}}", "requestSize": "{{.Request.ContentLength}}",
"status": {{.Response.Code}}, "responseSize": "{{.Response.Size}}", "userAgent":
"{{js .Request.UserAgent}}", "remoteIp": "{{js .Request.RemoteAddr}}",
"serverIp": "{{.Revision.PodIP}}", "referer": "{{js .Request.Referer}}",
"latency": "{{.Response.Latency}}s", "protocol": "{{.Request.Proto}}"},
"traceId": "{{index .Request.Header "X-B3-Traceid"}}"}'
- name: SERVING_ENABLE_REQUEST_LOG
value: "false"
- name: SERVING_REQUEST_METRICS_BACKEND
value: prometheus
- name: TRACING_CONFIG_BACKEND
value: none
- name: TRACING_CONFIG_ZIPKIN_ENDPOINT
- name: TRACING_CONFIG_STACKDRIVER_PROJECT_ID
- name: TRACING_CONFIG_DEBUG
value: "false"
- name: TRACING_CONFIG_SAMPLE_RATE
value: "0.1"
- name: USER_PORT
value: "8080"
- name: SYSTEM_NAMESPACE
value: knative-serving
- name: METRICS_DOMAIN
value: knative.dev/internal/serving
- name: SERVING_READINESS_PROBE
value: '{"tcpSocket":{"port":8080,"host":"127.0.0.1"},"successThreshold":1}'
- name: ENABLE_PROFILING
value: "false"
- name: SERVING_ENABLE_PROBE_REQUEST_LOG
value: "false"
- name: METRICS_COLLECTOR_ADDRESS
image: gcr.io/knative-releases/knative.dev/serving/cmd/queue#sha256:0db974f58b48b219ab8047e11b481c2bbda52b7a2d54db5ed58e8659748ec125
imagePullPolicy: IfNotPresent
name: queue-proxy
ports:
- containerPort: 8022
name: http-queueadm
protocol: TCP
- containerPort: 9090
name: http-autometric
protocol: TCP
- containerPort: 9091
name: http-usermetric
protocol: TCP
- containerPort: 8012
name: queue-port
protocol: TCP
readinessProbe:
exec:
command:
- /ko-app/queue
- -probe-period
- "0"
failureThreshold: 3
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 10
resources:
requests:
cpu: 25m
securityContext:
allowPrivilegeEscalation: false
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
dnsPolicy: ClusterFirst
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
terminationGracePeriodSeconds: 300
volumes:
- emptyDir: {}
name: knative-var-log
status:
conditions:
- lastTransitionTime: "2021-02-03T06:51:29Z"
lastUpdateTime: "2021-02-03T06:51:29Z"
message: Deployment does not have minimum availability.
reason: MinimumReplicasUnavailable
status: "False"
type: Available
- lastTransitionTime: "2021-02-03T07:38:08Z"
lastUpdateTime: "2021-02-03T07:38:08Z"
message: ReplicaSet "sklearn-iris-predictor-default-6xcgj-deployment-7c97895d96"
has timed out progressing.
reason: ProgressDeadlineExceeded
status: "False"
type: Progressing
observedGeneration: 2
replicas: 1
unavailableReplicas: 1
updatedReplicas: 1
kubectl describe pod podname -n namespace gives
Name: sklearn-iris-predictor-default-6xcgj-deployment-7c97895d96vqbgr
Namespace: kfserving-test
Priority: 0
Node: minikube/192.168.99.109
Start Time: Wed, 03 Feb 2021 13:50:33 +0530
Labels: app=sklearn-iris-predictor-default-6xcgj
component=predictor
pod-template-hash=7c97895d96
service.istio.io/canonical-name=sklearn-iris-predictor-default
service.istio.io/canonical-revision=sklearn-iris-predictor-default-6xcgj
serving.knative.dev/configuration=sklearn-iris-predictor-default
serving.knative.dev/configurationGeneration=1
serving.knative.dev/revision=sklearn-iris-predictor-default-6xcgj
serving.knative.dev/revisionUID=470195f7-db41-4e9c-ac6b-c96c79a1218f
serving.knative.dev/service=sklearn-iris-predictor-default
serving.kubeflow.org/inferenceservice=sklearn-iris
Annotations: autoscaling.knative.dev/class: kpa.autoscaling.knative.dev
autoscaling.knative.dev/minScale: 1
internal.serving.kubeflow.org/storage-initializer-sourceuri: gs://kfserving-samples/models/sklearn/iris
serving.knative.dev/creator: system:serviceaccount:kfserving-system:default
Status: Pending
IP: 172.17.0.22
Controlled By: ReplicaSet/sklearn-iris-predictor-default-6xcgj-deployment-7c97895d96
Init Containers:
storage-initializer:
Container ID: docker://262a195f39fad7dfc62b494d9c9bbda8c7cdeee2f4b903b2948b809c5e00fb0c
Image: gcr.io/kfserving/storage-initializer:v0.5.0-rc2
Image ID: docker-pullable://gcr.io/kfserving/storage-initializer#sha256:9a16e6af385412bb62fd7e09f6d749e107e3ad92c488039acd20361fb5dd68cc
Port: <none>
Host Port: <none>
Args:
gs://kfserving-samples/models/sklearn/iris
/mnt/models
State: Running
Started: Wed, 03 Feb 2021 13:58:00 +0530
Last State: Terminated
Reason: Error
Message: ownload(src_uri, dest_path)
File "/usr/local/lib/python3.7/site-packages/kfserving/storage.py", line 58, in download
Storage._download_gcs(uri, out_dir)
File "/usr/local/lib/python3.7/site-packages/kfserving/storage.py", line 116, in _download_gcs
for blob in blobs:
File "/usr/local/lib/python3.7/site-packages/google/api_core/page_iterator.py", line 212, in _items_iter
for page in self._page_iter(increment=False):
File "/usr/local/lib/python3.7/site-packages/google/api_core/page_iterator.py", line 243, in _page_iter
page = self._next_page()
File "/usr/local/lib/python3.7/site-packages/google/api_core/page_iterator.py", line 369, in _next_page
response = self._get_next_page_response()
File "/usr/local/lib/python3.7/site-packages/google/api_core/page_iterator.py", line 419, in _get_next_page_response
method=self._HTTP_METHOD, path=self.path, query_params=params
File "/usr/local/lib/python3.7/site-packages/google/cloud/storage/_http.py", line 63, in api_request
return call()
File "/usr/local/lib/python3.7/site-packages/google/api_core/retry.py", line 286, in retry_wrapped_func
on_error=on_error,
File "/usr/local/lib/python3.7/site-packages/google/api_core/retry.py", line 206, in retry_target
last_exc,
File "<string>", line 3, in raise_from
google.api_core.exceptions.RetryError: Deadline of 120.0s exceeded while calling functools.partial(functools.partial(<bound method JSONConnection.api_request of <google.cloud.storage._http.Connection object at 0x7fd57c954c50>>, timeout=60, method='GET', path='/b/kfserving-samples/o', query_params={'projection': 'noAcl', 'prefix': 'models/sklearn/iris/'})), last exception: HTTPSConnectionPool(host='storage.googleapis.com', port=443): Max retries exceeded with url: /storage/v1/b/kfserving-samples/o?projection=noAcl&prefix=models%2Fsklearn%2Firis%2F&prettyPrint=false (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7fd57c91bb90>: Failed to establish a new connection: [Errno 113] No route to host'))
Exit Code: 1
Started: Wed, 03 Feb 2021 13:53:53 +0530
Finished: Wed, 03 Feb 2021 13:57:45 +0530
Ready: False
Restart Count: 2
Limits:
cpu: 1
memory: 1Gi
Requests:
cpu: 100m
memory: 100Mi
Environment: <none>
Mounts:
/mnt/models from kfserving-provision-location (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-hw2rw (ro)
Containers:
kfserving-container:
Container ID:
Image: gcr.io/kfserving/sklearnserver#sha256:fd87e984a6092aae6efd28a2d596aac16d83d207a0269a503a221cb24cfd2f39
Image ID:
Port: 8080/TCP
Host Port: 0/TCP
Args:
--model_name=sklearn-iris
--model_dir=/mnt/models
--http_port=8080
--workers=0
State: Waiting
Reason: PodInitializing
Ready: False
Restart Count: 0
Limits:
cpu: 1
memory: 2Gi
Requests:
cpu: 1
memory: 2Gi
Environment:
PORT: 8080
K_REVISION: sklearn-iris-predictor-default-6xcgj
K_CONFIGURATION: sklearn-iris-predictor-default
K_SERVICE: sklearn-iris-predictor-default
K_INTERNAL_POD_NAME: sklearn-iris-predictor-default-6xcgj-deployment-7c97895d96vqbgr (v1:metadata.name)
K_INTERNAL_POD_NAMESPACE: kfserving-test (v1:metadata.namespace)
Mounts:
/mnt/models from kfserving-provision-location (ro)
/var/log from knative-var-log (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-hw2rw (ro)
queue-proxy:
Container ID:
Image: gcr.io/knative-releases/knative.dev/serving/cmd/queue#sha256:0db974f58b48b219ab8047e11b481c2bbda52b7a2d54db5ed58e8659748ec125
Image ID:
Ports: 8022/TCP, 9090/TCP, 9091/TCP, 8012/TCP
Host Ports: 0/TCP, 0/TCP, 0/TCP, 0/TCP
State: Waiting
Reason: PodInitializing
Ready: False
Restart Count: 0
Requests:
cpu: 25m
Readiness: exec [/ko-app/queue -probe-period 0] delay=0s timeout=10s period=10s #success=1 #failure=3
Environment:
SERVING_NAMESPACE: kfserving-test
SERVING_SERVICE: sklearn-iris-predictor-default
SERVING_CONFIGURATION: sklearn-iris-predictor-default
SERVING_REVISION: sklearn-iris-predictor-default-6xcgj
QUEUE_SERVING_PORT: 8012
CONTAINER_CONCURRENCY: 0
REVISION_TIMEOUT_SECONDS: 300
SERVING_POD: sklearn-iris-predictor-default-6xcgj-deployment-7c97895d96vqbgr (v1:metadata.name)
SERVING_POD_IP: (v1:status.podIP)
SERVING_LOGGING_CONFIG: {
"level": "info",
"development": false,
"outputPaths": ["stdout"],
"errorOutputPaths": ["stderr"],
"encoding": "json",
"encoderConfig": {
"timeKey": "ts",
"levelKey": "level",
"nameKey": "logger",
"callerKey": "caller",
"messageKey": "msg",
"stacktraceKey": "stacktrace",
"lineEnding": "",
"levelEncoder": "",
"timeEncoder": "iso8601",
"durationEncoder": "",
"callerEncoder": ""
}
}
SERVING_LOGGING_LEVEL:
SERVING_REQUEST_LOG_TEMPLATE: {"httpRequest": {"requestMethod": "{{.Request.Method}}", "requestUrl": "{{js .Request.RequestURI}}", "requestSize": "{{.Request.ContentLength}}", "status": {{.Response.Code}}, "responseSize": "{{.Response.Size}}", "userAgent": "{{js .Request.UserAgent}}", "remoteIp": "{{js .Request.RemoteAddr}}", "serverIp": "{{.Revision.PodIP}}", "referer": "{{js .Request.Referer}}", "latency": "{{.Response.Latency}}s", "protocol": "{{.Request.Proto}}"}, "traceId": "{{index .Request.Header "X-B3-Traceid"}}"}
SERVING_ENABLE_REQUEST_LOG: false
SERVING_REQUEST_METRICS_BACKEND: prometheus
TRACING_CONFIG_BACKEND: none
TRACING_CONFIG_ZIPKIN_ENDPOINT:
TRACING_CONFIG_STACKDRIVER_PROJECT_ID:
TRACING_CONFIG_DEBUG: false
TRACING_CONFIG_SAMPLE_RATE: 0.1
USER_PORT: 8080
SYSTEM_NAMESPACE: knative-serving
METRICS_DOMAIN: knative.dev/internal/serving
SERVING_READINESS_PROBE: {"tcpSocket":{"port":8080,"host":"127.0.0.1"},"successThreshold":1}
ENABLE_PROFILING: false
SERVING_ENABLE_PROBE_REQUEST_LOG: false
METRICS_COLLECTOR_ADDRESS:
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-hw2rw (ro)
Conditions:
Type Status
Initialized False
Ready False
ContainersReady False
PodScheduled True
Volumes:
knative-var-log:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
default-token-hw2rw:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-hw2rw
Optional: false
kfserving-provision-location:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 7m50s default-scheduler Successfully assigned kfserving-test/sklearn-iris-predictor-default-6xcgj-deployment-7c97895d96vqbgr to minikube
Warning BackOff 36s kubelet, minikube Back-off restarting failed container
Normal Pulled 24s (x3 over 7m47s) kubelet, minikube Container image "gcr.io/kfserving/storage-initializer:v0.5.0-rc2" already present on machine
Normal Created 23s (x3 over 7m47s) kubelet, minikube Created container storage-initializer
Normal Started 23s (x3 over 7m46s) kubelet, minikube Started container storage-initializer
I don't think it is possible to do this via the kubectl set env command.
When I tried to run the command on my local setup, passing the init container name, it returned this message:
warning: Deployment/dummy does not have any containers matching "dummy-init"
Command used:
kubectl set env -n dummy-ns deploy/dummy -c "dummy-init" dummy_env="true"
You can, however, use the kubectl edit command, which opens the full YAML in edit mode; add the required environment variable to whichever container you need (including the init container) and save the spec. This will create a new pod with the new spec.
kubectl edit -n dummy-ns deploy/dummy -o yaml
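For reference, a minimal sketch of what you would add under spec.template.spec.initContainers in the editor (the init container name comes from the pod description above; the proxy values are placeholders):
      initContainers:
      - name: storage-initializer
        env:
        - name: http_proxy
          value: http://proxy.example.com:3128   # placeholder
        - name: https_proxy
          value: http://proxy.example.com:3128   # placeholder
        - name: no_proxy
          value: .svc,.cluster.local             # placeholder
Only the env entries are added; the rest of the init container spec stays as generated.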
I am upgrading Apache Flink 1.10 to Apache Flink 1.11 in Kubernetes, but the jobmanager pod log shows:
cp: cannot stat '/opt/flink/opt/flink-metrics-prometheus-*.jar': No such file or directory
This is my jobmanager deployment YAML:
kind: Deployment
apiVersion: apps/v1
metadata:
name: report-flink-jobmanager
namespace: middleware
selfLink: /apis/apps/v1/namespaces/middleware/deployments/report-flink-jobmanager
uid: b7bd8f0d-cddb-44e7-8bbe-b96e68dbfbcd
resourceVersion: '13655071'
generation: 44
creationTimestamp: '2020-06-08T02:11:33Z'
labels:
app.kubernetes.io/instance: report-flink
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: flink
app.kubernetes.io/version: 1.10.0
component: jobmanager
helm.sh/chart: flink-0.1.15
annotations:
deployment.kubernetes.io/revision: '6'
meta.helm.sh/release-name: report-flink
meta.helm.sh/release-namespace: middleware
spec:
replicas: 1
selector:
matchLabels:
app.kubernetes.io/instance: report-flink
app.kubernetes.io/name: flink
component: jobmanager
template:
metadata:
creationTimestamp: null
labels:
app.kubernetes.io/instance: report-flink
app.kubernetes.io/name: flink
component: jobmanager
spec:
volumes:
- name: flink-config-volume
configMap:
name: report-flink-config
items:
- key: flink-conf.yaml
path: flink-conf.yaml.tpl
- key: log4j.properties
path: log4j.properties
- key: security.properties
path: security.properties
defaultMode: 420
- name: flink-pro-persistent-storage
persistentVolumeClaim:
claimName: flink-pv-claim
containers:
- name: jobmanager
image: 'flink:1.11'
command:
- /bin/bash
- '-c'
- >-
cp /opt/flink/opt/flink-metrics-prometheus-*.jar
/opt/flink/opt/flink-s3-fs-presto-*.jar /opt/flink/lib/ && wget
https://repo1.maven.org/maven2/com/github/oshi/oshi-core/3.4.0/oshi-core-3.4.0.jar
-O /opt/flink/lib/oshi-core-3.4.0.jar && wget
https://repo1.maven.org/maven2/net/java/dev/jna/jna/5.4.0/jna-5.4.0.jar
-O /opt/flink/lib/jna-5.4.0.jar && wget
https://repo1.maven.org/maven2/net/java/dev/jna/jna-platform/5.4.0/jna-platform-5.4.0.jar
-O /opt/flink/lib/jna-platform-5.4.0.jar && cp
$FLINK_HOME/conf/flink-conf.yaml.tpl
$FLINK_HOME/conf/flink-conf.yaml && $FLINK_HOME/bin/jobmanager.sh
start; while :; do if [[ -f $(find log -name '*jobmanager*.log'
-print -quit) ]]; then tail -f -n +1 log/*jobmanager*.log; fi;
done
workingDir: /opt/flink
ports:
- name: blob
containerPort: 6124
protocol: TCP
- name: rpc
containerPort: 6123
protocol: TCP
- name: ui
containerPort: 8081
protocol: TCP
- name: metrics
containerPort: 9999
protocol: TCP
env:
- name: JVM_ARGS
value: '-Djava.security.properties=/opt/flink/conf/security.properties'
- name: FLINK_POD_IP
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: status.podIP
- name: APOLLO_META
valueFrom:
configMapKeyRef:
name: pro-config
key: apollo.meta
- name: ENV
valueFrom:
configMapKeyRef:
name: pro-config
key: env
resources: {}
volumeMounts:
- name: flink-config-volume
mountPath: /opt/flink/conf/flink-conf.yaml.tpl
subPath: flink-conf.yaml.tpl
- name: flink-config-volume
mountPath: /opt/flink/conf/log4j.properties
subPath: log4j.properties
- name: flink-config-volume
mountPath: /opt/flink/conf/security.properties
subPath: security.properties
- name: flink-pro-persistent-storage
mountPath: /opt/flink/data/
livenessProbe:
tcpSocket:
port: 6124
initialDelaySeconds: 10
timeoutSeconds: 1
periodSeconds: 15
successThreshold: 1
failureThreshold: 3
readinessProbe:
tcpSocket:
port: 6123
initialDelaySeconds: 20
timeoutSeconds: 1
periodSeconds: 10
successThreshold: 1
failureThreshold: 3
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
imagePullPolicy: IfNotPresent
restartPolicy: Always
terminationGracePeriodSeconds: 30
dnsPolicy: ClusterFirst
serviceAccountName: jobmanager
serviceAccount: jobmanager
securityContext: {}
schedulerName: default-scheduler
strategy:
type: Recreate
revisionHistoryLimit: 10
progressDeadlineSeconds: 600
status:
observedGeneration: 44
replicas: 1
updatedReplicas: 1
unavailableReplicas: 1
conditions:
- type: Available
status: 'False'
lastUpdateTime: '2020-08-19T06:26:56Z'
lastTransitionTime: '2020-08-19T06:26:56Z'
reason: MinimumReplicasUnavailable
message: Deployment does not have minimum availability.
- type: Progressing
status: 'False'
lastUpdateTime: '2020-08-19T06:42:56Z'
lastTransitionTime: '2020-08-19T06:42:56Z'
reason: ProgressDeadlineExceeded
message: >-
ReplicaSet "report-flink-jobmanager-7b8b9bd6bb" has timed out
progressing.
Should I remove the reference to the jar file that no longer exists? How can I fix this?
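A quick way to see which jars the new image actually ships before adjusting the cp line (a sketch; it assumes the official flink image entrypoint passes unknown commands through):
docker run --rm flink:1.11 ls /opt/flink/opt
docker run --rm flink:1.11 ls /opt/flink/plugins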
I have been trying to create a POD with HELM UPGRADE:
helm upgrade --values=$(System.DefaultWorkingDirectory)/_NAME-deploy-CI/drop/values-NAME.yaml --namespace sda-NAME-pro --install --reset-values --debug --wait NAME .
but running into below error:
2020-07-08T12:51:28.0678161Z upgrade.go:367: [debug] warning: Upgrade "NAME" failed: failed to create resource: Deployment.apps "NAME" is invalid: [spec.template.spec.volumes[1].secret.secretName: Required value, spec.template.spec.containers[0].volumeMounts[2].name: Not found: "secretvol"]
2020-07-08T12:51:28.0899772Z Error: UPGRADE FAILED: failed to create resource: Deployment.apps "NAME" is invalid: [spec.template.spec.volumes[1].secret.secretName: Required value, spec.template.spec.containers[0].volumeMounts[2].name: Not found: "secretvol"]
YAML part:
volumeMounts:
- name: secretvol
mountPath: "/etc/secret-vol"
readOnly: true
volumes:
- name: jks
secret:
secretName: {{ .Values.secret.jks }}
- name: secretvol
secret:
secretName: {{ .Values.secret.secretvol }}
Maybe the first deploy needs a different command the first time? How can I specify these values to test it?
TL;DR
The issue you've encountered:
2020-07-08T12:51:28.0899772Z Error: UPGRADE FAILED: failed to create resource: Deployment.apps "NAME" is invalid: [spec.template.spec.volumes[1].secret.secretName: Required value, spec.template.spec.containers[0].volumeMounts[2].name: Not found: "secretvol"]
is connected with the fact that the variable: {{ .Values.secret.secretvol }} is missing.
To fix it you will need to set this value in either (see the sketch below):
the Helm command that you are using
the file that stores your values in the Helm chart
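For example, a minimal sketch of both options (the secret name is a placeholder for an existing Kubernetes Secret):
# On the command line:
helm upgrade --install NAME . --set secret.secretvol=SOME-EXISTING-SECRET
# Or in the values file used by the pipeline (values-NAME.yaml):
secret:
  jks: NAME-jks
  jssecacerts: jssecacerts
  secretvol: SOME-EXISTING-SECRET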
A tip!
You can run your Helm command with --debug --dry-run to output the generated YAMLs. This should show you where the errors are located.
There is official documentation about values in Helm. Please take a look here:
Helm.sh: Docs: Chart template guide: Values files
Based on:
I have been trying to create a POD with HELM UPGRADE:
I've made an example based on your issue and how you can fix it.
Steps:
Create a helm chart with correct values
Edit the values to reproduce the error
Create a helm chart
For the simplicity of the setup I created a basic Helm chart.
Below is the structure of files and directories:
❯ tree helm-dir
helm-dir
├── Chart.yaml
├── templates
│ └── pod.yaml
└── values.yaml
1 directory, 3 files
Create Chart.yaml file
Below is the Chart.yaml file:
apiVersion: v2
name: helm-pod
description: A Helm chart for spawning pod with volumeMount
version: 0.1.0
Create a values.yaml file
Below is the simple values.yaml file which will be used by default in the $ helm install command
usedImage: ubuntu
confidentialName: secret-password # name of the secret in Kubernetes
Create a template for a pod
This template is stored in the templates directory under the name pod.yaml.
The YAML definition below will be the template for the spawned pod:
apiVersion: v1
kind: Pod
metadata:
name: {{ .Values.usedImage }} # value from "values.yaml"
labels:
app: {{ .Values.usedImage }} # value from "values.yaml"
spec:
restartPolicy: Never
containers:
- name: {{ .Values.usedImage }} # value from "values.yaml"
image: {{ .Values.usedImage }} # value from "values.yaml"
imagePullPolicy: Always
command:
- sleep
- infinity
volumeMounts:
- name: secretvol # same name as in spec.volumes.name
mountPath: "/etc/secret-vol"
readOnly: true
volumes:
- name: secretvol # same name as in spec.containers.volumeMounts.name
secret:
secretName: {{ .Values.confidentialName }} # value from "values.yaml"
With the above example you should be able to run $ helm install --name test-pod .
You should get output similar to this:
NAME: test-pod
LAST DEPLOYED: Thu Jul 9 14:47:46 2020
NAMESPACE: default
STATUS: DEPLOYED
RESOURCES:
==> v1/Pod
NAME READY STATUS RESTARTS AGE
ubuntu 0/1 ContainerCreating 0 0s
Disclaimer!
The ubuntu pod is in the ContainerCreating state as there is no secret named secret-password in the cluster.
You can get more information about your pods by running:
$ kubectl describe pod POD_NAME
Edit the values to reproduce the error
The error you got as described earlier is most probably connected with the fact that the value: {{ .Values.secret.secretvol }} was missing.
If you were to edit the values.yaml file to:
usedImage: ubuntu
# confidentialName: secret-password # name of the secret in Kubernetes
Notice the added #.
You should get below error when trying to deploy this chart:
Error: release test-pod failed: Pod "ubuntu" is invalid: [spec.volumes[0].secret.secretName: Required value, spec.containers[0].volumeMounts[0].name: Not found: "secretvol"]
I previously mentioned the --debug --dry-run parameters for Helm.
If you run:
$ helm install --name test-pod --debug --dry-run .
You should get the output similar to this (this is only the part):
---
# Source: helm-pod/templates/pod.yaml
apiVersion: v1
kind: Pod
metadata:
name: ubuntu # value from "values.yaml"
labels:
app: ubuntu # value from "values.yaml"
spec:
restartPolicy: Never
containers:
- name: ubuntu # value from "values.yaml"
image: ubuntu # value from "values.yaml"
imagePullPolicy: Always
command:
- sleep
- infinity
volumeMounts:
- name: secretvol # same name as in spec.volumes.name
mountPath: "/etc/secret-vol"
readOnly: true
volumes:
- name: secretvol # same name as in spec.containers.volumeMounts.name
secret:
secretName: # value from "values.yaml"
As you can see, the value of secretName was missing. That's the reason the above error was showing up.
secretName: # value from "values.yaml"
Thank you Dawik, here is the output:
2020-07-10T11:34:26.3090526Z LAST DEPLOYED: Fri Jul 10 11:34:25 2020
2020-07-10T11:34:26.3091661Z NAMESPACE: sda-NAME
2020-07-10T11:34:26.3092410Z STATUS: pending-upgrade
2020-07-10T11:34:26.3092796Z REVISION: 13
2020-07-10T11:34:26.3093182Z TEST SUITE: None
2020-07-10T11:34:26.3093781Z USER-SUPPLIED VALUES:
2020-07-10T11:34:26.3105880Z affinity: {}
2020-07-10T11:34:26.3106801Z containers:
2020-07-10T11:34:26.3107446Z port: 8080
2020-07-10T11:34:26.3108124Z portName: http
2020-07-10T11:34:26.3108769Z protocol: TCP
2020-07-10T11:34:26.3109440Z env:
2020-07-10T11:34:26.3110613Z APP_NAME: NAME
2020-07-10T11:34:26.3112959Z JAVA_OPTS_EXT: -Djava.security.egd=file:/dev/./urandom -Dcom.sun.net.ssl.checkRevocation=*** -Djavax.net.ssl.trustStore=/etc/truststore/jssecacerts
2020-07-10T11:34:26.3115219Z -Djavax.net.ssl.trustStorePassword=changeit
2020-07-10T11:34:26.3116160Z SPRING_CLOUD_CONFIG_PROFILE: pro
2020-07-10T11:34:26.3116974Z TZ: Europe/Madrid
2020-07-10T11:34:26.3117647Z WILY_MOM_PORT: 5001
2020-07-10T11:34:26.3119640Z spring_application_name: NAME
2020-07-10T11:34:26.3121048Z spring_cloud_config_uri: URI
2020-07-10T11:34:26.3122038Z envSecrets: {}
2020-07-10T11:34:26.3122789Z fullnameOverride: ""
2020-07-10T11:34:26.3123489Z image:
2020-07-10T11:34:26.3124470Z pullPolicy: Always
2020-07-10T11:34:26.3125908Z repository: NAME-REPO
2020-07-10T11:34:26.3126955Z imagePullSecrets: []
2020-07-10T11:34:26.3127675Z ingress:
2020-07-10T11:34:26.3128727Z enabled: ***
2020-07-10T11:34:26.3129509Z livenessProbe: {}
2020-07-10T11:34:26.3130143Z nameOverride: ""
2020-07-10T11:34:26.3131148Z nameSpace: sda-NAME
2020-07-10T11:34:26.3131820Z nodeSelector: {}
2020-07-10T11:34:26.3132444Z podSecurityContext: {}
2020-07-10T11:34:26.3133135Z readinessProbe: {}
2020-07-10T11:34:26.3133742Z replicaCount: 1
2020-07-10T11:34:26.3134636Z resources:
2020-07-10T11:34:26.3135362Z limits:
2020-07-10T11:34:26.3135865Z cpu: 150m
2020-07-10T11:34:26.3136404Z memory: 1444Mi
2020-07-10T11:34:26.3137257Z requests:
2020-07-10T11:34:26.3137851Z cpu: 100m
2020-07-10T11:34:26.3138391Z memory: 1024Mi
2020-07-10T11:34:26.3138942Z route:
2020-07-10T11:34:26.3139486Z alternateBackends: []
2020-07-10T11:34:26.3140087Z annotations: null
2020-07-10T11:34:26.3140642Z enabled: true
2020-07-10T11:34:26.3141226Z fullnameOverride: ""
2020-07-10T11:34:26.3142695Z host:HOST-NAME
2020-07-10T11:34:26.3143480Z labels: null
2020-07-10T11:34:26.3144217Z nameOverride: ""
2020-07-10T11:34:26.3145137Z path: ""
2020-07-10T11:34:26.3145637Z service:
2020-07-10T11:34:26.3146439Z name: NAME
2020-07-10T11:34:26.3147049Z targetPort: http
2020-07-10T11:34:26.3147607Z weight: 100
2020-07-10T11:34:26.3148121Z status: ""
2020-07-10T11:34:26.3148623Z tls:
2020-07-10T11:34:26.3149162Z caCertificate: null
2020-07-10T11:34:26.3149820Z certificate: null
2020-07-10T11:34:26.3150467Z destinationCACertificate: null
2020-07-10T11:34:26.3151091Z enabled: true
2020-07-10T11:34:26.3151847Z insecureEdgeTerminationPolicy: None
2020-07-10T11:34:26.3152483Z key: null
2020-07-10T11:34:26.3153032Z termination: edge
2020-07-10T11:34:26.3154104Z wildcardPolicy: None
2020-07-10T11:34:26.3155687Z secret:
2020-07-10T11:34:26.3156714Z jks: NAME-jks
2020-07-10T11:34:26.3157408Z jssecacerts: jssecacerts
2020-07-10T11:34:26.3157962Z securityContext: {}
2020-07-10T11:34:26.3158490Z service:
2020-07-10T11:34:26.3159127Z containerPort: 8080
2020-07-10T11:34:26.3159627Z port: 8080
2020-07-10T11:34:26.3160103Z portName: http
2020-07-10T11:34:26.3160759Z targetPort: 8080
2020-07-10T11:34:26.3161219Z type: ClusterIP
2020-07-10T11:34:26.3161694Z serviceAccount:
2020-07-10T11:34:26.3162482Z create: ***
2020-07-10T11:34:26.3162990Z name: null
2020-07-10T11:34:26.3163451Z tolerations: []
2020-07-10T11:34:26.3163836Z
2020-07-10T11:34:26.3164534Z COMPUTED VALUES:
2020-07-10T11:34:26.3165022Z affinity: {}
2020-07-10T11:34:26.3165474Z containers:
2020-07-10T11:34:26.3165931Z port: 8080
2020-07-10T11:34:26.3166382Z portName: http
2020-07-10T11:34:26.3166861Z protocol: TCP
2020-07-10T11:34:26.3167284Z env:
2020-07-10T11:34:26.3168046Z APP_NAME: NAME
2020-07-10T11:34:26.3169887Z JAVA_OPTS_EXT: -Djava.security.egd=file:/dev/./urandom -Dcom.sun.net.ssl.checkRevocation=*** -Djavax.net.ssl.trustStore=/etc/truststore/jssecacerts
2020-07-10T11:34:26.3175782Z -Djavax.net.ssl.trustStorePassword=changeit
2020-07-10T11:34:26.3176587Z SPRING_CLOUD_CONFIG_PROFILE: pro
2020-07-10T11:34:26.3177184Z TZ: Europe/Madrid
2020-07-10T11:34:26.3177683Z WILY_MOM_PORT: 5001
2020-07-10T11:34:26.3178559Z spring_application_name: NAME
2020-07-10T11:34:26.3179807Z spring_cloud_config_uri: https://URL
2020-07-10T11:34:26.3181055Z envSecrets: {}
2020-07-10T11:34:26.3181569Z fullnameOverride: ""
2020-07-10T11:34:26.3182077Z image:
2020-07-10T11:34:26.3182707Z pullPolicy: Always
2020-07-10T11:34:26.3184026Z repository: REPO
2020-07-10T11:34:26.3185001Z imagePullSecrets: []
2020-07-10T11:34:26.3185461Z ingress:
2020-07-10T11:34:26.3186215Z enabled: ***
2020-07-10T11:34:26.3186709Z livenessProbe: {}
2020-07-10T11:34:26.3187187Z nameOverride: ""
2020-07-10T11:34:26.3188416Z nameSpace: sda-NAME
2020-07-10T11:34:26.3189008Z nodeSelector: {}
2020-07-10T11:34:26.3189522Z podSecurityContext: {}
2020-07-10T11:34:26.3190056Z readinessProbe: {}
2020-07-10T11:34:26.3190552Z replicaCount: 1
2020-07-10T11:34:26.3191030Z resources:
2020-07-10T11:34:26.3191686Z limits:
2020-07-10T11:34:26.3192320Z cpu: 150m
2020-07-10T11:34:26.3192819Z memory: 1444Mi
2020-07-10T11:34:26.3193319Z requests:
2020-07-10T11:34:26.3193797Z cpu: 100m
2020-07-10T11:34:26.3194463Z memory: 1024Mi
2020-07-10T11:34:26.3194975Z route:
2020-07-10T11:34:26.3195470Z alternateBackends: []
2020-07-10T11:34:26.3196028Z enabled: true
2020-07-10T11:34:26.3196556Z fullnameOverride: ""
2020-07-10T11:34:26.3197601Z host: HOST-NAME
2020-07-10T11:34:26.3198314Z nameOverride: ""
2020-07-10T11:34:26.3198828Z path: ""
2020-07-10T11:34:26.3199285Z service:
2020-07-10T11:34:26.3200023Z name: NAME
2020-07-10T11:34:26.3233791Z targetPort: http
2020-07-10T11:34:26.3234697Z weight: 100
2020-07-10T11:34:26.3235283Z status: ""
2020-07-10T11:34:26.3235819Z tls:
2020-07-10T11:34:26.3236787Z enabled: true
2020-07-10T11:34:26.3237479Z insecureEdgeTerminationPolicy: None
2020-07-10T11:34:26.3238168Z termination: edge
2020-07-10T11:34:26.3238800Z wildcardPolicy: None
2020-07-10T11:34:26.3239421Z secret:
2020-07-10T11:34:26.3240502Z jks: NAME-servers-jks
2020-07-10T11:34:26.3241249Z jssecacerts: jssecacerts
2020-07-10T11:34:26.3241901Z securityContext: {}
2020-07-10T11:34:26.3242534Z service:
2020-07-10T11:34:26.3243157Z containerPort: 8080
2020-07-10T11:34:26.3243770Z port: 8080
2020-07-10T11:34:26.3244543Z portName: http
2020-07-10T11:34:26.3245190Z targetPort: 8080
2020-07-10T11:34:26.3245772Z type: ClusterIP
2020-07-10T11:34:26.3246343Z serviceAccount:
2020-07-10T11:34:26.3247308Z create: ***
2020-07-10T11:34:26.3247993Z tolerations: []
2020-07-10T11:34:26.3248511Z
2020-07-10T11:34:26.3249065Z HOOKS:
2020-07-10T11:34:26.3249600Z MANIFEST:
2020-07-10T11:34:26.3250504Z ---
2020-07-10T11:34:26.3252176Z # Source: NAME/templates/service.yaml
2020-07-10T11:34:26.3253107Z apiVersion: v1
2020-07-10T11:34:26.3253715Z kind: Service
2020-07-10T11:34:26.3254487Z metadata:
2020-07-10T11:34:26.3255338Z name: NAME
2020-07-10T11:34:26.3256318Z namespace: sda-NAME
2020-07-10T11:34:26.3256883Z labels:
2020-07-10T11:34:26.3257666Z helm.sh/chart: NAME-1.0.0
2020-07-10T11:34:26.3258533Z app.kubernetes.io/name: NAME
2020-07-10T11:34:26.3259785Z app.kubernetes.io/instance: NAME
2020-07-10T11:34:26.3260503Z app.kubernetes.io/version: "latest"
2020-07-10T11:34:26.3261383Z app.kubernetes.io/managed-by: Helm
2020-07-10T11:34:26.3261955Z spec:
2020-07-10T11:34:26.3262427Z type: ClusterIP
2020-07-10T11:34:26.3263292Z ports:
2020-07-10T11:34:26.3264086Z - port: 8080
2020-07-10T11:34:26.3264659Z targetPort: 8080
2020-07-10T11:34:26.3265359Z protocol: TCP
2020-07-10T11:34:26.3265900Z name: http
2020-07-10T11:34:26.3266361Z selector:
2020-07-10T11:34:26.3267220Z app.kubernetes.io/name: NAME
2020-07-10T11:34:26.3268298Z app.kubernetes.io/instance: NAME
2020-07-10T11:34:26.3269380Z ---
2020-07-10T11:34:26.3270539Z # Source: NAME/templates/deployment.yaml
2020-07-10T11:34:26.3271606Z apiVersion: apps/v1
2020-07-10T11:34:26.3272400Z kind: Deployment
2020-07-10T11:34:26.3273326Z metadata:
2020-07-10T11:34:26.3274457Z name: NAME
2020-07-10T11:34:26.3275511Z namespace: sda-NAME
2020-07-10T11:34:26.3276177Z labels:
2020-07-10T11:34:26.3277219Z helm.sh/chart: NAME-1.0.0
2020-07-10T11:34:26.3278322Z app.kubernetes.io/name: NAME
2020-07-10T11:34:26.3279447Z app.kubernetes.io/instance: NAME
2020-07-10T11:34:26.3280249Z app.kubernetes.io/version: "latest"
2020-07-10T11:34:26.3281398Z app.kubernetes.io/managed-by: Helm
2020-07-10T11:34:26.3282289Z spec:
2020-07-10T11:34:26.3282881Z replicas: 1
2020-07-10T11:34:26.3283505Z selector:
2020-07-10T11:34:26.3284469Z matchLabels:
2020-07-10T11:34:26.3285628Z app.kubernetes.io/name: NAME
2020-07-10T11:34:26.3286815Z app.kubernetes.io/instance: NAME
2020-07-10T11:34:26.3287549Z template:
2020-07-10T11:34:26.3288192Z metadata:
2020-07-10T11:34:26.3288826Z labels:
2020-07-10T11:34:26.3289909Z app.kubernetes.io/name: NAME
2020-07-10T11:34:26.3291596Z app.kubernetes.io/instance: NAME
2020-07-10T11:34:26.3292439Z spec:
2020-07-10T11:34:26.3293109Z serviceAccountName: default
2020-07-10T11:34:26.3293774Z securityContext:
2020-07-10T11:34:26.3294666Z {}
2020-07-10T11:34:26.3295217Z containers:
2020-07-10T11:34:26.3296338Z - name: NAME
2020-07-10T11:34:26.3297240Z securityContext:
2020-07-10T11:34:26.3297859Z {}
2020-07-10T11:34:26.3299353Z image: "REGISTRY-IMAGE"
2020-07-10T11:34:26.3300638Z imagePullPolicy: Always
2020-07-10T11:34:26.3301358Z ports:
2020-07-10T11:34:26.3302491Z - name:
2020-07-10T11:34:26.3303380Z containerPort: 8080
2020-07-10T11:34:26.3304479Z protocol: TCP
2020-07-10T11:34:26.3305325Z env:
2020-07-10T11:34:26.3306418Z - name: APP_NAME
2020-07-10T11:34:26.3307576Z value: "NAME"
2020-07-10T11:34:26.3308757Z - name: JAVA_OPTS_EXT
2020-07-10T11:34:26.3311974Z value: "-Djava.security.egd=file:/dev/./urandom -Dcom.sun.net.ssl.checkRevocation=*** -Djavax.net.ssl.trustStore=/etc/truststore/jssecacerts -Djavax.net.ssl.trustStorePassword=changeit"
2020-07-10T11:34:26.3313760Z - name: SPRING_CLOUD_CONFIG_PROFILE
2020-07-10T11:34:26.3314842Z value: "pro"
2020-07-10T11:34:26.3315890Z - name: TZ
2020-07-10T11:34:26.3316777Z value: "Europe/Madrid"
2020-07-10T11:34:26.3317863Z - name: WILY_MOM_PORT
2020-07-10T11:34:26.3318485Z value: "5001"
2020-07-10T11:34:26.3319421Z - name: spring_application_name
2020-07-10T11:34:26.3320679Z value: "NAME"
2020-07-10T11:34:26.3321858Z - name: spring_cloud_config_uri
2020-07-10T11:34:26.3323093Z value: "https://config.sda-NAME-pro.svc.cluster.local"
2020-07-10T11:34:26.3324190Z resources:
2020-07-10T11:34:26.3324905Z limits:
2020-07-10T11:34:26.3325439Z cpu: 150m
2020-07-10T11:34:26.3325985Z memory: 1444Mi
2020-07-10T11:34:26.3326739Z requests:
2020-07-10T11:34:26.3327305Z cpu: 100m
2020-07-10T11:34:26.3327875Z memory: 1024Mi
2020-07-10T11:34:26.3328436Z volumeMounts:
2020-07-10T11:34:26.3329476Z - name: jks
2020-07-10T11:34:26.3330147Z mountPath: "/etc/jks"
2020-07-10T11:34:26.3331153Z readOnly: true
2020-07-10T11:34:26.3332053Z - name: jssecacerts
2020-07-10T11:34:26.3332739Z mountPath: "/etc/truststore"
2020-07-10T11:34:26.3333356Z readOnly: true
2020-07-10T11:34:26.3334402Z - name: secretvol
2020-07-10T11:34:26.3335565Z mountPath: "/etc/secret-vol"
2020-07-10T11:34:26.3336302Z readOnly: true
2020-07-10T11:34:26.3336935Z volumes:
2020-07-10T11:34:26.3338100Z - name: jks
2020-07-10T11:34:26.3338724Z secret:
2020-07-10T11:34:26.3339946Z secretName: NAME-servers-jks
2020-07-10T11:34:26.3340817Z - name: secretvol
2020-07-10T11:34:26.3341347Z secret:
2020-07-10T11:34:26.3341870Z secretName:
2020-07-10T11:34:26.3342633Z - name: jssecacerts
2020-07-10T11:34:26.3343444Z secret:
2020-07-10T11:34:26.3344103Z secretName: jssecacerts
2020-07-10T11:34:26.3344866Z ---
2020-07-10T11:34:26.3345846Z # Source: NAME/templates/route.yaml
2020-07-10T11:34:26.3346641Z apiVersion: route.openshift.io/v1
2020-07-10T11:34:26.3347112Z kind: Route
2020-07-10T11:34:26.3347568Z metadata:
2020-07-10T11:34:26.3354831Z name: NAME
2020-07-10T11:34:26.3357144Z labels:
2020-07-10T11:34:26.3358020Z helm.sh/chart: NAME-1.0.0
2020-07-10T11:34:26.3359360Z app.kubernetes.io/name: NAME
2020-07-10T11:34:26.3360306Z app.kubernetes.io/instance: NAME
2020-07-10T11:34:26.3361002Z app.kubernetes.io/version: "latest"
2020-07-10T11:34:26.3361888Z app.kubernetes.io/managed-by: Helm
2020-07-10T11:34:26.3362463Z spec:
2020-07-10T11:34:26.3363374Z host: HOST
2020-07-10T11:34:26.3364364Z path:
2020-07-10T11:34:26.3364940Z wildcardPolicy: None
2020-07-10T11:34:26.3365630Z port:
2020-07-10T11:34:26.3366080Z targetPort: http
2020-07-10T11:34:26.3366496Z tls:
2020-07-10T11:34:26.3367144Z termination: edge
2020-07-10T11:34:26.3367630Z insecureEdgeTerminationPolicy: None
2020-07-10T11:34:26.3368072Z to:
2020-07-10T11:34:26.3368572Z kind: Service
2020-07-10T11:34:26.3369571Z name: NAME
2020-07-10T11:34:26.3369919Z weight: 100
2020-07-10T11:34:26.3370115Z status:
2020-07-10T11:34:26.3370287Z ingress: []
2020-07-10T11:34:26.3370419Z
2020-07-10T11:34:26.3370579Z NOTES:
2020-07-10T11:34:26.3370833Z 1. Get the application URL by running these commands:
2020-07-10T11:34:26.3371698Z export POD_NAME=$(kubectl get pods --namespace sda-NAME -l "app.kubernetes.io/name=NAME,app.kubernetes.io/instance=NAME" -o jsonpath="{.items[0].metadata.name}")
2020-07-10T11:34:26.3372278Z echo "Visit http://127.0.0.1:8080 to use your application"
2020-07-10T11:34:26.3373358Z kubectl --namespace sda-NAME port-forward $POD_NAME 8080:80
2020-07-10T11:34:26.3373586Z
2020-07-10T11:34:26.3385047Z ##[section]Finishing: Helm Install/Upgrade NAME
It looks fine and doesn't show any error... but if I run it without --dry-run, it crashes at the same point...
On the other hand, I tried it without this volume and secret... and it works perfectly! I don't understand it.
Thank you for your patience and guidance.
UPDATE & FIX:
Finally, the problem was in the values-NAME.yml file:
secret:
jks: VALUE
jssecacerts: VALUE
it needed the following line under secret:
secretvol: VALUE