"Invalid host" error when accessing Nexus in Kubernetes

I am using Helm 3 to install Nexus in Kubernetes v1.18:
helm install stable/sonatype-nexus --name=nexus
and then expose Nexus to the outside with Traefik 2.x under the domain nexus.dolphin.com. But when I use that domain to access the Nexus service, it shows this message:
Invalid host. To browse Nexus, click here. To use the Docker registry, point your client
I have read this question, but it does not seem to fit my situation. This is my Nexus YAML config now:
kind: Deployment
apiVersion: apps/v1
metadata:
  name: nexus-sonatype-nexus
  namespace: infrastructure
  selfLink: /apis/apps/v1/namespaces/infrastructure/deployments/nexus-sonatype-nexus
  uid: 023de15b-19eb-442d-8375-11532825919d
  resourceVersion: '1710210'
  generation: 3
  creationTimestamp: '2020-08-16T07:17:07Z'
  labels:
    app: sonatype-nexus
    app.kubernetes.io/managed-by: Helm
    chart: sonatype-nexus-1.23.1
    fullname: nexus-sonatype-nexus
    heritage: Helm
    release: nexus
  annotations:
    deployment.kubernetes.io/revision: '1'
    meta.helm.sh/release-name: nexus
    meta.helm.sh/release-namespace: infrastructure
  managedFields:
    - manager: Go-http-client
      operation: Update
      apiVersion: apps/v1
      time: '2020-08-16T07:17:07Z'
      fieldsType: FieldsV1
    - manager: kube-controller-manager
      operation: Update
      apiVersion: apps/v1
      time: '2020-08-18T16:26:34Z'
      fieldsType: FieldsV1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sonatype-nexus
      release: nexus
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: sonatype-nexus
        release: nexus
    spec:
      volumes:
        - name: nexus-sonatype-nexus-data
          persistentVolumeClaim:
            claimName: nexus-sonatype-nexus-data
        - name: nexus-sonatype-nexus-backup
          emptyDir: {}
      containers:
        - name: nexus
          image: 'sonatype/nexus3:3.20.1'
          ports:
            - name: nexus-docker-g
              containerPort: 5003
              protocol: TCP
            - name: nexus-http
              containerPort: 8081
              protocol: TCP
          env:
            - name: install4jAddVmParams
              value: >-
                -Xms1200M -Xmx1200M -XX:MaxDirectMemorySize=2G
                -XX:+UnlockExperimentalVMOptions
                -XX:+UseCGroupMemoryLimitForHeap
            - name: NEXUS_SECURITY_RANDOMPASSWORD
              value: 'false'
          resources: {}
          volumeMounts:
            - name: nexus-sonatype-nexus-data
              mountPath: /nexus-data
            - name: nexus-sonatype-nexus-backup
              mountPath: /nexus-data/backup
          livenessProbe:
            httpGet:
              path: /
              port: 8081
              scheme: HTTP
            initialDelaySeconds: 30
            timeoutSeconds: 1
            periodSeconds: 30
            successThreshold: 1
            failureThreshold: 6
          readinessProbe:
            httpGet:
              path: /
              port: 8081
              scheme: HTTP
            initialDelaySeconds: 30
            timeoutSeconds: 1
            periodSeconds: 30
            successThreshold: 1
            failureThreshold: 6
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          imagePullPolicy: IfNotPresent
        - name: nexus-proxy
          image: 'quay.io/travelaudience/docker-nexus-proxy:2.5.0'
          ports:
            - name: nexus-proxy
              containerPort: 8080
              protocol: TCP
          env:
            - name: ALLOWED_USER_AGENTS_ON_ROOT_REGEX
              value: GoogleHC
            - name: CLOUD_IAM_AUTH_ENABLED
              value: 'false'
            - name: BIND_PORT
              value: '8080'
            - name: ENFORCE_HTTPS
              value: 'false'
            - name: NEXUS_DOCKER_HOST
            - name: NEXUS_HTTP_HOST
            - name: UPSTREAM_DOCKER_PORT
              value: '5003'
            - name: UPSTREAM_HTTP_PORT
              value: '8081'
            - name: UPSTREAM_HOST
              value: localhost
          resources: {}
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          imagePullPolicy: IfNotPresent
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
      dnsPolicy: ClusterFirst
      serviceAccountName: nexus-sonatype-nexus
      serviceAccount: nexus-sonatype-nexus
      securityContext:
        fsGroup: 2000
      schedulerName: default-scheduler
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%
      maxSurge: 25%
  revisionHistoryLimit: 10
  progressDeadlineSeconds: 600
status:
  observedGeneration: 3
  replicas: 1
  updatedReplicas: 1
  readyReplicas: 1
  availableReplicas: 1
  conditions:
    - type: Progressing
      status: 'True'
      lastUpdateTime: '2020-08-18T16:23:54Z'
      lastTransitionTime: '2020-08-18T16:23:54Z'
      reason: NewReplicaSetAvailable
      message: >-
        ReplicaSet "nexus-sonatype-nexus-79fd4488d5" has successfully
        progressed.
    - type: Available
      status: 'True'
      lastUpdateTime: '2020-08-18T16:26:34Z'
      lastTransitionTime: '2020-08-18T16:26:34Z'
      reason: MinimumReplicasAvailable
      message: Deployment has minimum availability.
Why can the domain not reach Nexus by default, and what should I do to access Nexus by its domain?

From the chart's documentation you should set the Helm value nexusProxy.env.nexusHttpHost to nexus.dolphin.com.
The Docker image used here runs a proxy that serves the Nexus HTTP UI and the Nexus Docker registry on different domains; if you don't specify either of them, you get the behaviour you're seeing.
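For example, the host names can be passed as chart values at install or upgrade time. A minimal sketch, assuming the release lives in the infrastructure namespace shown above; nexusProxy.env.nexusDockerHost is only needed if you also expose the Docker registry, and docker.dolphin.com is a hypothetical domain:
helm upgrade --install nexus stable/sonatype-nexus \
  --namespace infrastructure \
  --set nexusProxy.env.nexusHttpHost=nexus.dolphin.com \
  --set nexusProxy.env.nexusDockerHost=docker.dolphin.com
After the rollout, the NEXUS_HTTP_HOST and NEXUS_DOCKER_HOST variables of the nexus-proxy container should no longer be empty, and requests carrying Host: nexus.dolphin.com are routed to the Nexus UI instead of the "Invalid host" page.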

Related

Helm chart upgrade with different Docker image tag

I have a question about a Helm upgrade. I'm working on a Helm chart that deploys a pod with the Docker image tag 2.190.1-alpine.
The release name is jenkins.
I want to change the Docker image tag in the Helm chart to 2.240.
I run:
helm upgrade jenkins codecentric/jenkins --set image.tag=2.2
But I get this error:
Error: UPGRADE FAILED: unable to decode "": resource.metadataOnlyObject.ObjectMeta: v1.ObjectMeta.Labels: ReadString: expects " or n, but found 2, error found in #10 byte of ...|version":2.240,"helm.|..., bigger context ...|s.io/name":"jenkins","app.kubernetes.io/version":2.240,"helm.sh/chart":"jenkins-1.7.0"},"name":"myReleases|...
Has anyone had experience with this?
Here is a deployment file:
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
  creationTimestamp: "2020-02-06T22:39:51Z"
  generation: 1
  labels:
    app.kubernetes.io/component: master
    app.kubernetes.io/instance: jenkins
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: jenkins
    helm.sh/chart: jenkins-1.5.1
  name: jenkins
  namespace: default
  resourceVersion: "28697012"
  selfLink: /apis/apps/v1/namespaces/default/deployments/jenkins
  uid: 9934dea3-b3a1-4109-b725-da5410aacc5f
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app.kubernetes.io/component: master
      app.kubernetes.io/instance: jenkins
      app.kubernetes.io/name: jenkins
  strategy:
    type: Recreate
  template:
    metadata:
      annotations:
        checksum/init: e43addd28e27d8ab36052943e5586515d252a27c3ddd85e15c43e99501ab1c0b
        checksum/ref: e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
      creationTimestamp: null
      labels:
        app.kubernetes.io/component: master
        app.kubernetes.io/instance: jenkins
        app.kubernetes.io/managed-by: Helm
        app.kubernetes.io/name: jenkins
        helm.sh/chart: jenkins-1.5.1
    spec:
      containers:
      - env:
        - name: JENKINS_SLAVE_AGENT_PORT
          value: "50000"
        - name: JAVA_OPTS
          value: -Dhudson.slaves.NodeProvisioner.initialDelay=0 -Dhudson.model.LoadStatistics.decay=0.7
            -Dhudson.slaves.NodeProvisioner.MARGIN=30 -Dhudson.slaves.NodeProvisioner.MARGIN0=0.6
            -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap -XX:MaxRAMFraction=2
            -XshowSettings:vm
        image: jenkins/jenkins:2.190.1-alpine
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /login
            port: http
            scheme: HTTP
          initialDelaySeconds: 90
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        name: jenkins-master
        ports:
        - containerPort: 8080
          name: http
          protocol: TCP
        - containerPort: 50000
          name: agent
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /login
            port: http
            scheme: HTTP
          initialDelaySeconds: 15
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /var/jenkins_home
          name: jenkins-home
        - mountPath: /usr/share/jenkins/ref/plugins
          name: jenkins-plugins
      dnsPolicy: ClusterFirst
      initContainers:
      - args:
        - jenkins/jenkins
        - 2.190.1-alpine
        - "false"
        command:
        - /init/init.sh
        image: jenkins/jenkins:2.190.1-alpine
        imagePullPolicy: IfNotPresent
        name: jenkins-init
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /init
          name: init
        - mountPath: /var/jenkins_home
          name: jenkins-home
        - mountPath: /usr/share/jenkins/ref/plugins
          name: jenkins-plugins
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext:
        fsGroup: 1000
        runAsNonRoot: true
        runAsUser: 1000
      serviceAccount: default
      serviceAccountName: default
      terminationGracePeriodSeconds: 30
      volumes:
      - name: jenkins-home
        persistentVolumeClaim:
          claimName: jenkins-pvc
      - emptyDir: {}
        name: jenkins-plugins
      - configMap:
          defaultMode: 365
          name: jenkins-init
        name: init
status:
  availableReplicas: 1
  conditions:
  - lastTransitionTime: "2020-02-06T22:39:51Z"
    lastUpdateTime: "2020-02-06T22:40:32Z"
    message: ReplicaSet "jenkins-866f548f55" has successfully progressed.
    reason: NewReplicaSetAvailable
    status: "True"
    type: Progressing
  - lastTransitionTime: "2020-06-05T12:13:56Z"
    lastUpdateTime: "2020-06-05T12:13:56Z"
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  observedGeneration: 1
  readyReplicas: 1
  replicas: 1
  updatedReplicas: 1
Is it safe to edit the deployment file directly?
I noticed that the error lists chart version 1.7.0 even though mine is 1.5.1. Maybe I should state in the upgrade command that I want to stay with the old chart version?
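The error text suggests the tag value 2.240 is being injected as a bare number into a field that must be a string label. A hedged sketch of working around that with Helm's string handling, and of pinning the chart version mentioned above (1.5.1 comes from the helm.sh/chart label in the deployment):
helm upgrade jenkins codecentric/jenkins --version 1.5.1 --set-string image.tag=2.240
Quoting the value, e.g. --set image.tag="2.240", is often not enough because the shell strips the quotes before Helm sees them; --set-string forces the value to be treated as a string rather than a number.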

How to make Traefik bind the host server's ports 80 and 443 when using a Deployment

I am using Traefik 2.2.1 as my cluster's entrypoint, deployed as a Deployment. This is my deployment config:
kind: Deployment
apiVersion: apps/v1
metadata:
  name: traefik
  namespace: kube-system
  selfLink: /apis/apps/v1/namespaces/kube-system/deployments/traefik
  uid: ddee327d-8570-44be-ab8d-06cb440187f4
  resourceVersion: '335024'
  generation: 12
  creationTimestamp: '2020-06-04T07:37:20Z'
  labels:
    app.kubernetes.io/instance: traefik
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: traefik
    helm.sh/chart: traefik-8.2.1
  annotations:
    deployment.kubernetes.io/revision: '7'
    meta.helm.sh/release-name: traefik
    meta.helm.sh/release-namespace: kube-system
spec:
  replicas: 4
  selector:
    matchLabels:
      app.kubernetes.io/instance: traefik
      app.kubernetes.io/name: traefik
  template:
    metadata:
      creationTimestamp: null
      labels:
        app.kubernetes.io/instance: traefik
        app.kubernetes.io/managed-by: Helm
        app.kubernetes.io/name: traefik
        helm.sh/chart: traefik-8.2.1
    spec:
      volumes:
        - name: data
          emptyDir: {}
      containers:
        - name: traefik
          image: 'traefik:2.2.1'
          args:
            - '--global.checknewversion'
            - '--global.sendanonymoususage'
            - '--entryPoints.traefik.address=:9000'
            - '--entryPoints.web.address=:80'
            - '--entryPoints.websecure.address=:443'
            - '--api.dashboard=true'
            - '--ping=true'
            - '--providers.kubernetescrd'
            - '--providers.kubernetesingress'
          ports:
            - name: traefik
              containerPort: 9000
              protocol: TCP
            - name: web
              containerPort: 8000
              protocol: TCP
            - name: websecure
              containerPort: 8443
              protocol: TCP
          resources: {}
          volumeMounts:
            - name: data
              mountPath: /data
          livenessProbe:
            httpGet:
              path: /ping
              port: 9000
              scheme: HTTP
            initialDelaySeconds: 10
            timeoutSeconds: 2
            periodSeconds: 10
            successThreshold: 1
            failureThreshold: 3
          readinessProbe:
            httpGet:
              path: /ping
              port: 9000
              scheme: HTTP
            initialDelaySeconds: 10
            timeoutSeconds: 2
            periodSeconds: 10
            successThreshold: 1
            failureThreshold: 1
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          imagePullPolicy: IfNotPresent
          securityContext:
            capabilities:
              drop:
                - ALL
            runAsUser: 65532
            runAsGroup: 65532
            runAsNonRoot: true
            readOnlyRootFilesystem: true
      restartPolicy: Always
      terminationGracePeriodSeconds: 60
      dnsPolicy: ClusterFirst
      serviceAccountName: traefik
      serviceAccount: traefik
      securityContext:
        fsGroup: 65532
      schedulerName: default-scheduler
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  revisionHistoryLimit: 10
  progressDeadlineSeconds: 600
status:
  observedGeneration: 12
  replicas: 5
  updatedReplicas: 2
  readyReplicas: 3
  availableReplicas: 3
  unavailableReplicas: 2
  conditions:
    - type: Available
      status: 'True'
      lastUpdateTime: '2020-06-04T08:41:03Z'
      lastTransitionTime: '2020-06-04T08:41:03Z'
      reason: MinimumReplicasAvailable
      message: Deployment has minimum availability.
    - type: Progressing
      status: 'True'
      lastUpdateTime: '2020-06-04T10:57:35Z'
      lastTransitionTime: '2020-06-04T10:48:40Z'
      reason: ReplicaSetUpdated
      message: ReplicaSet "traefik-dd74b59b" is progressing.
My question is: is it possible to make Traefik listen on the host's ports 80 and 443? If so, how? Or should I switch the Deployment to a DaemonSet? If neither works, I would have to deploy an nginx on every node to forward traffic.
Add hostNetwork: true to the pod spec. This makes the pod use the host's network namespace:
...
spec:
  hostNetwork: true
  containers:
    - name: traefik
...
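If you would rather not share the whole host network namespace, an alternative sketch (an assumption, not something taken from the chart above) is to keep the unprivileged entrypoint ports and publish them with hostPort, so traffic hitting the node on 80/443 is forwarded to the pod's 8000/8443; the web/websecure entrypoints would then need to listen on :8000 and :8443 rather than :80 and :443:
args:
  - '--entryPoints.web.address=:8000'
  - '--entryPoints.websecure.address=:8443'
ports:
  - name: web
    containerPort: 8000
    hostPort: 80
    protocol: TCP
  - name: websecure
    containerPort: 8443
    hostPort: 443
    protocol: TCP
With plain hostPort the pod is only reachable on the nodes it is scheduled to, which is why this pattern is usually combined with a DaemonSet, as you suggested.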

Kubernetes Dashboard Ingress returning empty response from server

I am trying to set up the Kubernetes dashboard. I have enabled custom SSL certs from my domain and can curl the pod directly with no issues; I can also curl the service and it works. However, when I try to access it through the Ingress I get (52) empty response from server. I have an NLB forwarding to the port of the nginx controller service (ingress works fine with another app). Here is my ingress config:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
  labels:
    app: dashboard
    name: dashboard-ingress
  name: dashboard-ingress
  namespace: kubernetes-dashboard
spec:
  rules:
  - host: k8sdash.domain.com
    http:
      paths:
      - backend:
          serviceName: kubernetes-dashboard
          servicePort: 443
        path: /
Here is the Daemonset config for my ingress controllers.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  annotations:
    deprecated.daemonset.template.generation: "3"
  creationTimestamp: "2020-05-19T15:48:13Z"
  generation: 3
  labels:
    app: lb
    app.kubernetes.io/component: controller
    chart: nginx-ingress-1.36.3
    heritage: Tiller
    release: lb
  name: lb-controller
  namespace: kube-system
  resourceVersion: "747622"
  selfLink: /apis/apps/v1/namespaces/kube-system/daemonsets/lb-controller
  uid: 19d830ba-f2d9-4c6f-bc8d-d64667a900c7
spec:
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: lb
      release: lb
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: lb
        app.kubernetes.io/component: controller
        component: controller
        release: lb
    spec:
      containers:
      - args:
        - /nginx-ingress-controller
        - --default-backend-service=kube-system/lb-default-backend
        - --publish-service=kube-system/lb-controller
        - --election-id=ingress-controller-leader
        - --ingress-class=nginx
        - --configmap=kube-system/lb-controller
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
        image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.30.0
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          initialDelaySeconds: 10
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        name: lb-controller
        ports:
        - containerPort: 80
          hostPort: 80
          name: http
          protocol: TCP
        - containerPort: 443
          hostPort: 443
          name: https
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          initialDelaySeconds: 10
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        resources: {}
        securityContext:
          allowPrivilegeEscalation: true
          capabilities:
            add:
            - NET_BIND_SERVICE
            drop:
            - ALL
          runAsUser: 101
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      hostNetwork: true
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      serviceAccount: lb
      serviceAccountName: lb
      terminationGracePeriodSeconds: 60
  updateStrategy:
    rollingUpdate:
      maxUnavailable: 1
    type: RollingUpdate
status:
  currentNumberScheduled: 3
  desiredNumberScheduled: 3
  numberAvailable: 3
  numberMisscheduled: 0
  numberReady: 3
  observedGeneration: 3
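A couple of hedged checks that usually narrow this kind of problem down: send the controller the same request the NLB would, with the expected Host header, and then read what the controller logged for it (the <nlb-address> placeholder is hypothetical):
curl -kv https://<nlb-address>/ -H 'Host: k8sdash.domain.com'
kubectl -n kube-system logs daemonset/lb-controller | grep k8sdash.domain.com
If the controller log shows the request being proxied to the kubernetes-dashboard backend, the problem sits between nginx and the dashboard's HTTPS port; if there is no log line at all, the request never reached the controller.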

Need to apply the Prometheus rules from the Alertmanager repo in Kubernetes

I have two GitLab repos:
=> gitlab a
=> gitlab b
gitlab a contains the stateful set and pod for Prometheus and the Prometheus pushgateway.
gitlab b contains the Alertmanager service and pod, plus the Prometheus rules.
All the pods and containers are up and running.
I am trying to apply the Prometheus rules to the Prometheus stateful set (screenshot: prometheusRule.png), i.e. apply the kind: PrometheusRule resource to the Prometheus stateful set.
Can someone help?
Applied rules YAML:
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  labels:
    prometheus: k8s
    role: alert-rules
  name: prometheus-k8s-rules
  namespace: cmp-monitoring
spec:
  groups:
  - name: node-exporter.rules
    rules:
    - expr: |
        count without (cpu) (
          count without (mode) (
            node_cpu_seconds_total{job="node-exporter"}
          )
        )
      record: instance:node_num_cpu:sum
    - expr: |
        1 - avg without (cpu, mode) (
          rate(node_cpu_seconds_total{job="node-exporter", mode="idle"}[1m])
        )
      record: instance:node_cpu_utilisation:rate1m
Prometheus StatefulSet:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: prometheus
  labels:
    app: prometheus
spec:
  selector:
    matchLabels:
      app: prometheus
  serviceName: prometheus
  replicas: 1
  template:
    metadata:
      labels:
        app: prometheus
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: prometheus
        image: prom/prometheus
        imagePullPolicy: Always
        ports:
        - name: http
          containerPort: 9090
        volumeMounts:
        - name: prometheus-config
          mountPath: "/etc/prometheus/prometheus.yml"
          subPath: prometheus.yml
        - name: prometheus-data
          mountPath: "/prometheus"
        #- name: rules-general
        #  mountPath: "/etc/prometheus/prometheus.rules.yml"
        #  subPath: prometheus.rules.yml
        livenessProbe:
          httpGet:
            path: /-/healthy
            port: 9090
          initialDelaySeconds: 120
          periodSeconds: 40
          successThreshold: 1
          timeoutSeconds: 10
          failureThreshold: 3
        readinessProbe:
          httpGet:
            path: /-/healthy
            port: 9090
          initialDelaySeconds: 120
          periodSeconds: 40
          successThreshold: 1
          timeoutSeconds: 10
          failureThreshold: 3
      securityContext:
        fsGroup: 1000
      volumes:
      - name: prometheus-config
        configMap:
          name: prometheus-server-conf
      #- name: rules-general
      #  configMap:
      #    name: prometheus-server-conf
  volumeClaimTemplates:
  - metadata:
      name: prometheus-data
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: rbd-default
      resources:
        requests:
          storage: 10Gi
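Note that kind: PrometheusRule is a CRD that is acted on by the Prometheus Operator; a plain Prometheus StatefulSet like the one above does not pick it up by itself. A minimal sketch of the classic alternative, along the lines of the commented-out rules-general volume in the manifest (the ConfigMap name prometheus-rules and its contents are illustrative assumptions): put the rule groups into a ConfigMap, mount it into the container, and list the file under rule_files in prometheus.yml.
apiVersion: v1
kind: ConfigMap
metadata:
  name: prometheus-rules   # hypothetical name
  namespace: cmp-monitoring
data:
  prometheus.rules.yml: |
    groups:
    - name: node-exporter.rules
      rules:
      - record: instance:node_num_cpu:sum
        expr: count without (cpu) (count without (mode) (node_cpu_seconds_total{job="node-exporter"}))
In the StatefulSet, uncomment the rules-general volumeMount and back it with this ConfigMap, then reference the file from prometheus.yml:
rule_files:
  - /etc/prometheus/prometheus.rules.yml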

Kubernetes JSON Deployment patch definition

I am trying to patch a deployment with the command below. I am trying to set an environment variable in the pod by passing an existing value from the metadata label definition of my YAML.
Patching command:
kubectl patch deployment deploytest3 --type=json -p='[{"op": "add", "path": "/spec/template/spec/containers/1/env/2","value": {"name": "CUSTOMERID","valueFrom": {"fieldRef": {"fieldPath": "metadata.labels['customerid']"}}}}]'
Below is my YAML for reference (this is for reference, not the original):
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "10"
  creationTimestamp: 2019-01-22T11:24:17Z
  generation: 10
  labels:
    app: my-cloud
  name: deploytest3
  namespace: default
  resourceVersion: "60076984"
  selfLink: /apis/extensions/v1beta1/namespaces/default/deployments/deploytest3
  uid: 41950c24-1e38-11e9-ba0c-42010a8000d6
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: my-cloud
      customerid: "gx0388d"
      customername: deploytest3
      environment: dev
      tier: backend
      version: 11.0-latest
  strategy:
    rollingUpdate:
      maxSurge: 0
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: my-cloud
        customerid: "gx0388d"
        customername: deploytest3
        environment: dev
        tier: backend
        version: 11.0-latest
    spec:
      containers:
      - args:
        - -c
        - /bin/aSoftMS --simple-media-server;/bin/core_find.sh
        command:
        - /bin/bash
        image: us.gcr.io/data/myms:11.0-latest
        imagePullPolicy: IfNotPresent
        name: ms
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /opt/latest/system_rw/
          name: deploytest3
      - args:
        - -c
        command:
        - /bin/bash
        image: us.gcr.io/latest/core:11.0-latest
        imagePullPolicy: IfNotPresent
        lifecycle:
          preStop:
            exec:
              command:
              - /bin/CoreDumpGen
        livenessProbe:
          failureThreshold: 3
          httpGet:
            httpHeaders:
            - name: User-Agent
              value: DeploymentAdmin
            path: /livenessstatus
            port: 8443
            scheme: HTTPS
          initialDelaySeconds: 60
          periodSeconds: 15
          successThreshold: 1
          timeoutSeconds: 15
        env:
        - name: latest_CUSTOMER
          value: abc-test
        - name: latest
          value: abc
        image: us.gcr.io/latest/latest/core2:11.0-latest
        imagePullPolicy: IfNotPresent
        name: core2
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      hostname: deploytest3
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 60
      volumes:
      - name: bucket-auth
        secret:
          defaultMode: 256
          secretName: bucket-auth
status:
  availableReplicas: 1
  conditions:
  - lastTransitionTime: 2019-01-22T11:24:17Z
    lastUpdateTime: 2019-01-22T11:24:17Z
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  - lastTransitionTime: 2019-01-22T11:24:17Z
    lastUpdateTime: 2019-05-31T13:49:03Z
    message: ReplicaSet "deploytest3-7ddbc9b9c4" has successfully progressed.
    reason: NewReplicaSetAvailable
    status: "True"
    type: Progressing
  observedGeneration: 10
  readyReplicas: 1
  replicas: 1
  updatedReplicas: 1
Below is the error I get when executing the command shared above:
The Deployment "deploytest3" is invalid: spec.template.spec.containers[2].env[2].valueFrom.fieldRef.fieldPath: Invalid value: "metadata.labels[customerid]": error converting fieldPath:
field label not supported: metadata.labels[customerid]
If I add the environment variable to the deployment manually, it works.
I think I am not getting the JSON format right. Can someone help me?
"fieldPath": "metadata.labels['customerid']" - is incorrect
Apparently, it should be "fieldPath": "spec.template.metadata.labels['customerid']"
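If that is right, applied to the original command it would look like this (only the fieldPath changes; everything else in the patch stays the same):
kubectl patch deployment deploytest3 --type=json -p='[{"op": "add", "path": "/spec/template/spec/containers/1/env/2","value": {"name": "CUSTOMERID","valueFrom": {"fieldRef": {"fieldPath": "spec.template.metadata.labels['customerid']"}}}}]'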
This is not exactly kubectl patch, but it will also set the env variable from the deployment's label definition:
kubectl set env deployment/deploytest3 CUSTOMERID=$(kubectl get deployments deploytest3 -o jsonpath='{$.spec.template.metadata.labels.customerid}')