I have set up a log generator with Loki and Logstash. Grafana can see the data source and the labels are being picked up, but the log generator's logs are coming in under Grafana's own labels. What am I doing wrong here?
---
# Source: logstash/templates/poddisruptionbudget.yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
name: "logstash-logstash-pdb"
labels:
app: "logstash-logstash"
chart: "logstash"
heritage: "Helm"
release: "logstash"
spec:
maxUnavailable: 1
selector:
matchLabels:
app: "logstash-logstash"
---
# Source: logstash/templates/configmap-pipeline.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: logstash-logstash-pipeline
labels:
app: "logstash-logstash"
chart: "logstash"
heritage: "Helm"
release: "logstash"
data:
logstash.yml: |
http.host: "0.0.0.0"
path.config: /usr/share/logstash/pipeline
logstash.conf: |
input {
file {
path => ["/var/log/*.log"]
start_position => "beginning"
ignore_older => 0
sincedb_path => "/dev/null"
}
}
filter {
if [kubernetes] {
mutate {
add_field => {
"container_name" => "%{[kubernetes][container][name]}"
"namespace" => "%{[kubernetes][namespace]}"
"pod" => "%{[kubernetes][pod][name]}"
}
replace => { "host" => "%{[kubernetes][node][name]}"}
}
}
mutate {
remove_field => ["tags"]
}
}
output {
stdout { codec => rubydebug}
loki {
url => "http://loki-loki-distributed-distributor.loki-benchmark.svc.cluster.local:3100/loki/api/v1/push"
}
}
---
# Source: logstash/templates/service-headless.yaml
kind: Service
apiVersion: v1
metadata:
name: "logstash-logstash-headless"
labels:
app: "logstash-logstash"
chart: "logstash"
heritage: "Helm"
release: "logstash"
spec:
clusterIP: None
selector:
app: "logstash-logstash"
ports:
- name: http
port: 9600
---
# Source: logstash/templates/statefulset.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: logstash-logstash
labels:
app: "logstash-logstash"
chart: "logstash"
heritage: "Helm"
release: "logstash"
spec:
serviceName: logstash-logstash-headless
selector:
matchLabels:
app: "logstash-logstash"
release: "logstash"
replicas: 1
podManagementPolicy: Parallel
updateStrategy:
type: RollingUpdate
template:
metadata:
name: "logstash-logstash"
labels:
app: "logstash-logstash"
chart: "logstash"
heritage: "Helm"
release: "logstash"
annotations:
pipelinechecksum: e5576a55d691ae22c1da1204f1e548e8aa936dc6415af52eb65699f5a155bb8
spec:
securityContext:
fsGroup: 1000
runAsUser: 1000
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: app
operator: In
values:
- "logstash-logstash"
topologyKey: kubernetes.io/hostname
terminationGracePeriodSeconds: 120
volumes:
- name: logstashpipeline
configMap:
name: logstash-logstash-pipeline
containers:
- name: "logstash"
securityContext:
capabilities:
drop:
- ALL
runAsNonRoot: true
runAsUser: 1000
image: "grafana/logstash-output-loki:1.0.1"
imagePullPolicy: "IfNotPresent"
livenessProbe:
failureThreshold: 3
httpGet:
path: /
port: http
initialDelaySeconds: 300
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 5
readinessProbe:
failureThreshold: 3
httpGet:
path: /
port: http
initialDelaySeconds: 60
periodSeconds: 10
successThreshold: 3
timeoutSeconds: 5
ports:
- name: http
containerPort: 9600
resources:
limits:
cpu: 1000m
memory: 1536Mi
requests:
cpu: 100m
memory: 1536Mi
env:
- name: LS_JAVA_OPTS
value: "-Xmx1g -Xms1g"
- name: XPACK_MONITORING_ENABLED
value: "false"
volumeMounts:
- name: logstashpipeline
mountPath: /usr/share/logstash/pipeline/logstash.conf
subPath: logstash.conf
You can try adding the include_fields option to the Loki output in your Logstash configuration, which should help you resolve the issue:
output {
stdout { codec => rubydebug}
loki {
url => "http://loki-loki-distributed-distributor.loki-benchmark.svc.cluster.local:3100/loki/api/v1/push"
include_fields => ["container_name","namespace","pod","host"]
}
}
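If the labels still do not show up as expected, it can help to query Loki's label API directly and confirm that the mapped fields (container_name, namespace, pod, host) are actually arriving as Loki labels. A minimal sketch, assuming the query endpoint follows the same naming pattern as the distributor service above (the query-frontend service name is an assumption; adjust it to whatever your loki-distributed release exposes for queries):
# List the label names Loki currently knows about; the mapped fields should appear here.
curl -s http://loki-loki-distributed-query-frontend.loki-benchmark.svc.cluster.local:3100/loki/api/v1/labels
# List the values recorded for one of them, e.g. "namespace".
curl -s http://loki-loki-distributed-query-frontend.loki-benchmark.svc.cluster.local:3100/loki/api/v1/label/namespace/values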
I'm managing a Kubernetes cluster and there is a duplicate pod that keeps coming back; the duplicate ReplicaSet controlling it also keeps coming back after deletion. It's very strange. I also can't set the ReplicaSet to desire 0 pods, but that might be by design.
I can't really think of more information to share.
Does anyone recognise this issue and know how to fix it?
Here's the ReplicaSet
apiVersion: apps/v1
kind: ReplicaSet
metadata:
name: cash-idemix-ca-6bc646cbcc
namespace: boc-cash-portal
uid: babc0236-2053-4088-b8e8-b8ae2ed9303c
resourceVersion: '25466073'
generation: 1
creationTimestamp: '2022-11-28T13:18:42Z'
labels:
app.kubernetes.io/instance: cash-idemix-ca
app.kubernetes.io/name: cash-idemix-ca
pod-template-hash: 6bc646cbcc
annotations:
deployment.kubernetes.io/desired-replicas: '1'
deployment.kubernetes.io/max-replicas: '2'
deployment.kubernetes.io/revision: '7'
ownerReferences:
- apiVersion: apps/v1
kind: Deployment
name: cash-idemix-ca
uid: 2a3300ed-f666-4a30-98b7-7ab2ebcb2a0d
controller: true
blockOwnerDeletion: true
managedFields:
- manager: kube-controller-manager
operation: Update
apiVersion: apps/v1
time: '2022-11-28T13:18:42Z'
fieldsType: FieldsV1
- manager: kube-controller-manager
operation: Update
apiVersion: apps/v1
time: '2022-11-29T13:27:37Z'
fieldsType: FieldsV1
subresource: status
selfLink: >-
/apis/apps/v1/namespaces/boc-cash-portal/replicasets/cash-idemix-ca-6bc646cbcc
status:
replicas: 1
fullyLabeledReplicas: 1
observedGeneration: 1
spec:
replicas: 1
selector:
matchLabels:
app.kubernetes.io/instance: cash-idemix-ca
app.kubernetes.io/name: cash-idemix-ca
pod-template-hash: 6bc646cbcc
template:
metadata:
creationTimestamp: null
labels:
app.kubernetes.io/instance: cash-idemix-ca
app.kubernetes.io/name: cash-idemix-ca
pod-template-hash: 6bc646cbcc
annotations:
kubectl.kubernetes.io/restartedAt: '2022-07-19T11:45:27Z'
spec:
volumes:
- name: fabric-ca-server-home
persistentVolumeClaim:
claimName: cash-idemix-ca-fabric-ca-server-home
containers:
- name: cash-idemix-ca
image: ca.icr.io/samara-dev-container-images/cash-idemix-ca:0.3.0
command:
- sh
args:
- '-c'
- >-
sleep 1 && fabric-ca-server start -b
$(IDEMIX_ADMIN_USERNAME):$(IDEMIX_ADMIN_PASSWORD) --port 7054
--idemix.curve gurvy.Bn254 --loglevel debug
ports:
- name: api
containerPort: 7054
protocol: TCP
env:
- name: FABRIC_CA_SERVER_HOME
value: /idemix-config/fabric-ca-gurvy
- name: IDEMIX_ADMIN_USERNAME
valueFrom:
secretKeyRef:
name: cash-idemix-ca-admin-credentials
key: IDEMIX_ADMIN_USERNAME
- name: IDEMIX_ADMIN_PASSWORD
valueFrom:
secretKeyRef:
name: cash-idemix-ca-admin-credentials
key: IDEMIX_ADMIN_PASSWORD
resources: {}
volumeMounts:
- name: fabric-ca-server-home
mountPath: /idemix-config/fabric-ca-gurvy
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
imagePullPolicy: IfNotPresent
restartPolicy: Always
terminationGracePeriodSeconds: 30
dnsPolicy: ClusterFirst
serviceAccountName: cash-idemix-ca
serviceAccount: cash-idemix-ca
securityContext: {}
imagePullSecrets:
- name: all-icr-io
schedulerName: default-scheduler
Edit: The Fool was correct, there is a Deployment that recreates the ReplicaSet. Its spec says it only wants 1 replica, though, so I still don't see why it ends up creating two of them.
I'm using Lens to manage my cluster, and it shows that the desired number of replicas is indeed 2. I can set it to 1, but the change won't persist. Is there anywhere else I could look?
cash-idemix-ca Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
name: cash-idemix-ca
namespace: boc-cash-portal
uid: 2a3300ed-f666-4a30-98b7-7ab2ebcb2a0d
resourceVersion: '25467341'
generation: 10
creationTimestamp: '2022-07-18T14:13:57Z'
labels:
app.kubernetes.io/instance: cash-idemix-ca
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: cash-idemix-ca
app.kubernetes.io/version: manual-0.2.1
helm.sh/chart: cash-idemix-ca-0.12.0
annotations:
deployment.kubernetes.io/revision: '7'
kubectl.kubernetes.io/last-applied-configuration: >
{"apiVersion":"apps/v1","kind":"Deployment","metadata":{"annotations":{},"labels":{"app.kubernetes.io/instance":"cash-idemix-ca","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/name":"cash-idemix-ca","app.kubernetes.io/version":"manual-0.2.1","helm.sh/chart":"cash-idemix-ca-0.12.0"},"name":"cash-idemix-ca","namespace":"boc-cash-portal"},"spec":{"replicas":1,"revisionHistoryLimit":0,"selector":{"matchLabels":{"app.kubernetes.io/instance":"cash-idemix-ca","app.kubernetes.io/name":"cash-idemix-ca"}},"template":{"metadata":{"labels":{"app.kubernetes.io/instance":"cash-idemix-ca","app.kubernetes.io/name":"cash-idemix-ca"}},"spec":{"containers":[{"args":["-c","sleep
1 \u0026\u0026 fabric-ca-server start -b
$(IDEMIX_ADMIN_USERNAME):$(IDEMIX_ADMIN_PASSWORD) --port 7054
--idemix.curve gurvy.Bn254 --loglevel
debug"],"command":["sh"],"env":[{"name":"FABRIC_CA_SERVER_HOME","value":"/idemix-config/fabric-ca-gurvy"},{"name":"IDEMIX_ADMIN_USERNAME","valueFrom":{"secretKeyRef":{"key":"IDEMIX_ADMIN_USERNAME","name":"cash-idemix-ca-admin-credentials"}}},{"name":"IDEMIX_ADMIN_PASSWORD","valueFrom":{"secretKeyRef":{"key":"IDEMIX_ADMIN_PASSWORD","name":"cash-idemix-ca-admin-credentials"}}}],"image":"ca.icr.io/samara-dev-container-images/cash-idemix-ca:0.3.0","imagePullPolicy":"IfNotPresent","name":"cash-idemix-ca","ports":[{"containerPort":7054,"name":"api","protocol":"TCP"}],"volumeMounts":[{"mountPath":"/idemix-config/fabric-ca-gurvy","name":"fabric-ca-server-home","readOnly":false}]}],"imagePullSecrets":[{"name":"all-icr-io"}],"serviceAccountName":"cash-idemix-ca","volumes":[{"name":"fabric-ca-server-home","persistentVolumeClaim":{"claimName":"cash-idemix-ca-fabric-ca-server-home"}}]}}}}
status:
observedGeneration: 10
replicas: 2
updatedReplicas: 1
readyReplicas: 1
availableReplicas: 1
unavailableReplicas: 1
conditions:
- type: Available
status: 'True'
lastUpdateTime: '2022-11-28T13:53:56Z'
lastTransitionTime: '2022-11-28T13:53:56Z'
reason: MinimumReplicasAvailable
message: Deployment has minimum availability.
- type: Progressing
status: 'False'
lastUpdateTime: '2022-11-29T13:37:38Z'
lastTransitionTime: '2022-11-29T13:37:38Z'
reason: ProgressDeadlineExceeded
message: ReplicaSet "cash-idemix-ca-6bc646cbcc" has timed out progressing.
spec:
replicas: 1
selector:
matchLabels:
app.kubernetes.io/instance: cash-idemix-ca
app.kubernetes.io/name: cash-idemix-ca
template:
metadata:
creationTimestamp: null
labels:
app.kubernetes.io/instance: cash-idemix-ca
app.kubernetes.io/name: cash-idemix-ca
annotations:
kubectl.kubernetes.io/restartedAt: '2022-07-19T11:45:27Z'
spec:
volumes:
- name: fabric-ca-server-home
persistentVolumeClaim:
claimName: cash-idemix-ca-fabric-ca-server-home
containers:
- name: cash-idemix-ca
image: ca.icr.io/samara-dev-container-images/cash-idemix-ca:0.3.0
command:
- sh
args:
- '-c'
- >-
sleep 1 && fabric-ca-server start -b
$(IDEMIX_ADMIN_USERNAME):$(IDEMIX_ADMIN_PASSWORD) --port 7054
--idemix.curve gurvy.Bn254 --loglevel debug
ports:
- name: api
containerPort: 7054
protocol: TCP
env:
- name: FABRIC_CA_SERVER_HOME
value: /idemix-config/fabric-ca-gurvy
- name: IDEMIX_ADMIN_USERNAME
valueFrom:
secretKeyRef:
name: cash-idemix-ca-admin-credentials
key: IDEMIX_ADMIN_USERNAME
- name: IDEMIX_ADMIN_PASSWORD
valueFrom:
secretKeyRef:
name: cash-idemix-ca-admin-credentials
key: IDEMIX_ADMIN_PASSWORD
resources: {}
volumeMounts:
- name: fabric-ca-server-home
mountPath: /idemix-config/fabric-ca-gurvy
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
imagePullPolicy: IfNotPresent
restartPolicy: Always
terminationGracePeriodSeconds: 30
dnsPolicy: ClusterFirst
serviceAccountName: cash-idemix-ca
serviceAccount: cash-idemix-ca
securityContext: {}
imagePullSecrets:
- name: all-icr-io
schedulerName: default-scheduler
strategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 25%
maxSurge: 25%
revisionHistoryLimit: 0
progressDeadlineSeconds: 600
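The Deployment status above (replicas: 2, updatedReplicas: 1, unavailableReplicas: 1, reason ProgressDeadlineExceeded) suggests a stuck rolling update rather than a misconfigured replica count: the new ReplicaSet's pod never becomes ready, so the old ReplicaSet is never scaled down, and both keep one pod around. A hedged way to confirm this, using the names from the manifests above:
# List the ReplicaSets owned by the Deployment; a stuck rollout shows an old and a new one.
kubectl get rs -n boc-cash-portal -l app.kubernetes.io/instance=cash-idemix-ca
# Show why the rollout is not progressing.
kubectl rollout status deployment/cash-idemix-ca -n boc-cash-portal
kubectl describe deployment cash-idemix-ca -n boc-cash-portal
# Inspect the pods to see which one never becomes ready and why.
kubectl get pods -n boc-cash-portal -l app.kubernetes.io/instance=cash-idemix-ca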
We deploy the service with Helm. The ingress template looks like this:
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: ui-app-ingress
{{- with .Values.ingress.annotations}}
annotations:
{{- toYaml . | nindent 4}}
{{- end}}
spec:
rules:
- host: {{ .Values.ingress.hostname }}
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: {{ include "ui-app-chart.fullname" . }}
port:
number: 80
tls:
- hosts:
- {{ .Values.ingress.hostname }}
secretName: {{ .Values.ingress.certname }}
As you can see, we already use networking.k8s.io/v1, but if I watch the Traefik logs I find this error:
1 reflector.go:138] pkg/mod/k8s.io/client-go@v0.20.2/tools/cache/reflector.go:167: Failed to watch *v1beta1.Ingress: failed to list *v1beta1.Ingress: the server could not find the requested resource (get ingresses.extensions)
which results in a TLS certificate error:
time="2022-06-07T15:40:35Z" level=debug msg="Serving default certificate for request: \"example.de\""
time="2022-06-07T15:40:35Z" level=debug msg="http: TLS handshake error from 10.1.0.4:57484: remote error: tls: unknown certificate"
time="2022-06-07T15:40:35Z" level=debug msg="Serving default certificate for request: \"example.de\""
time="2022-06-07T15:53:06Z" level=debug msg="Serving default certificate for request: \"\""
time="2022-06-07T16:03:31Z" level=debug msg="Serving default certificate for request: \"<ip-adress>\""
time="2022-06-07T16:03:32Z" level=debug msg="Serving default certificate for request: \"<ip-adress>\""
I already found out that networking.k8s.io/v1beta1 is no longer served, but networking.k8s.io/v1 has been the apiVersion defined in the template the whole time.
Why does it still try to get from v1beta1? And how can I fix this?
We use this TLSOption:
apiVersion: traefik.containo.us/v1alpha1
kind: TLSOption
metadata:
name: default
namespace: default
spec:
minVersion: VersionTLS12
maxVersion: VersionTLS13
cipherSuites:
- TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
- TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
We use the Traefik Helm chart, rolled out with Terraform:
apiVersion: apps/v1
kind: Deployment
metadata:
annotations:
deployment.kubernetes.io/revision: "2"
meta.helm.sh/release-name: traefik
meta.helm.sh/release-namespace: traefik
creationTimestamp: "2021-06-12T10:06:11Z"
generation: 2
labels:
app.kubernetes.io/instance: traefik
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: traefik
helm.sh/chart: traefik-9.19.1
name: traefik
namespace: traefik
resourceVersion: "86094434"
uid: 903a6f54-7698-4290-bc59-d234a191965c
spec:
progressDeadlineSeconds: 600
replicas: 3
revisionHistoryLimit: 10
selector:
matchLabels:
app.kubernetes.io/instance: traefik
app.kubernetes.io/name: traefik
strategy:
rollingUpdate:
maxSurge: 1
maxUnavailable: 1
type: RollingUpdate
template:
metadata:
creationTimestamp: null
labels:
app.kubernetes.io/instance: traefik
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: traefik
helm.sh/chart: traefik-9.19.1
spec:
containers:
- args:
- --global.checknewversion
- --global.sendanonymoususage
- --entryPoints.traefik.address=:9000/tcp
- --entryPoints.web.address=:8000/tcp
- --entryPoints.websecure.address=:8443/tcp
- --api.dashboard=true
- --ping=true
- --providers.kubernetescrd
- --providers.kubernetesingress
- --providers.file.filename=/etc/traefik/traefik.yml
- --accesslog=true
- --accesslog.format=json
- --log.level=DEBUG
- --entrypoints.websecure.http.tls
- --entrypoints.web.http.redirections.entrypoint.to=websecure
- --entrypoints.web.http.redirections.entrypoint.scheme=https
- --entrypoints.web.http.redirections.entrypoint.permanent=true
- --entrypoints.web.http.redirections.entrypoint.to=:443
image: traefik:2.4.8
imagePullPolicy: IfNotPresent
livenessProbe:
failureThreshold: 3
httpGet:
path: /ping
port: 9000
scheme: HTTP
initialDelaySeconds: 10
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 2
name: traefik
ports:
- containerPort: 9000
name: traefik
protocol: TCP
- containerPort: 8000
name: web
protocol: TCP
- containerPort: 8443
name: websecure
protocol: TCP
readinessProbe:
failureThreshold: 1
httpGet:
path: /ping
port: 9000
scheme: HTTP
initialDelaySeconds: 10
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 2
resources: {}
securityContext:
capabilities:
add:
- NET_BIND_SERVICE
drop:
- ALL
readOnlyRootFilesystem: true
runAsGroup: 0
runAsNonRoot: false
runAsUser: 0
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /data
name: data
- mountPath: /tmp
name: tmp
- mountPath: /etc/traefik
name: traefik-cm
readOnly: true
dnsPolicy: ClusterFirst
restartPolicy: Always
schedulerName: default-scheduler
securityContext:
fsGroup: 65532
serviceAccount: traefik
serviceAccountName: traefik
terminationGracePeriodSeconds: 60
tolerations:
- effect: NoSchedule
key: env
operator: Equal
value: conhub
volumes:
- emptyDir: {}
name: data
- emptyDir: {}
name: tmp
- configMap:
defaultMode: 420
name: traefik-cm
name: traefik-cm
status:
availableReplicas: 3
conditions:
- lastTransitionTime: "2022-06-07T09:19:58Z"
lastUpdateTime: "2022-06-07T09:19:58Z"
message: Deployment has minimum availability.
reason: MinimumReplicasAvailable
status: "True"
type: Available
- lastTransitionTime: "2021-06-12T10:06:11Z"
lastUpdateTime: "2022-06-07T16:39:01Z"
message: ReplicaSet "traefik-84c6f5f98b" has successfully progressed.
reason: NewReplicaSetAvailable
status: "True"
type: Progressing
observedGeneration: 2
readyReplicas: 3
replicas: 3
updatedReplicas: 3
resource "helm_release" "traefik" {
name = "traefik"
namespace = "traefik"
create_namespace = true
repository = "https://helm.traefik.io/traefik"
chart = "traefik"
set {
name = "service.spec.loadBalancerIP"
value = azurerm_public_ip.pub_ip.ip_address
}
set {
name = "service.annotations.service\\.beta\\.kubernetes\\.io/azure-load-balancer-resource-group"
value = var.resource_group_aks
}
set {
name = "additionalArguments"
value = "{--accesslog=true,--accesslog.format=json,--log.level=DEBUG,--entrypoints.websecure.http.tls,--entrypoints.web.http.redirections.entrypoint.to=websecure,--entrypoints.web.http.redirections.entrypoint.scheme=https,--entrypoints.web.http.redirections.entrypoint.permanent=true,--entrypoints.web.http.redirections.entrypoint.to=:443}"
}
set {
name = "deployment.replicas"
value = 3
}
timeout = 600
depends_on = [
azurerm_kubernetes_cluster.aks
]
}
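One way to see why the watch fails is to check which API groups the cluster still serves: on recent Kubernetes versions the v1beta1 Ingress API is gone entirely, so any client that still lists it (here, the client-go bundled into the Traefik 2.4.8 image, as the log above shows) gets "the server could not find the requested resource". A quick check, assuming kubectl access to the cluster:
# Show which networking API versions the API server serves; only networking.k8s.io/v1
# is expected on clusters that have dropped the v1beta1 Ingress.
kubectl api-versions | grep networking.k8s.io
# Confirm the Ingress object itself exists under the v1 API (add -n <namespace> if needed).
kubectl get ingresses.v1.networking.k8s.io ui-app-ingress -o name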
I found out that the problem was the version of the Traefik image.
I quick-fixed it by setting the latest image:
kubectl set image deployment/traefik traefik=traefik:2.7.0 -n traefik
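Since the release is managed by Terraform's helm_release, a kubectl set image will be reverted by the next terraform apply; a more durable fix is to pin the image tag through the chart values (image.tag is the usual value name in the Traefik chart, but verify it against the chart version you deploy, and mirror the same setting in a set block of the helm_release resource). A quick manual sketch with the Helm CLI:
# Pin a Traefik image that understands networking.k8s.io/v1 Ingress.
helm repo add traefik https://helm.traefik.io/traefik
helm upgrade traefik traefik/traefik -n traefik --reuse-values --set image.tag=2.7.0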
I'm trying to run a local Airflow instance on my laptop using minikube and a deployment.yml file, applied with the following command: kubectl apply -f ./deployment.yml.
After slightly tweaking this file I was able to end up with all three pods (postgres, webserver, scheduler) running fine.
The result of the kubectl get pods
The content of the file:
---
# Source: airflow/templates/rbac/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: release-name-airflow
namespace: default
labels:
app.kubernetes.io/name: airflow
helm.sh/chart: airflow-12.0.5
app.kubernetes.io/instance: release-name
app.kubernetes.io/managed-by: Helm
automountServiceAccountToken: true
---
# Source: airflow/charts/postgresql/templates/secrets.yaml
apiVersion: v1
kind: Secret
metadata:
name: release-name-postgresql
namespace: default
labels:
app.kubernetes.io/name: postgresql
helm.sh/chart: postgresql-11.0.6
app.kubernetes.io/instance: release-name
app.kubernetes.io/managed-by: Helm
type: Opaque
data:
password: "**************"
# We don't auto-generate LDAP password when it's not provided as we do for other passwords
---
# Source: airflow/templates/config/secret.yaml
apiVersion: v1
kind: Secret
metadata:
name: release-name-airflow
namespace: default
labels:
app.kubernetes.io/name: airflow
helm.sh/chart: airflow-12.0.5
app.kubernetes.io/instance: release-name
app.kubernetes.io/managed-by: Helm
type: Opaque
data:
airflow-password: "*************"
# Airflow keys must be base64-encoded, hence we need to pipe to 'b64enc' twice
# The auto-generation mechanism available at "common.secrets.passwords.manage" isn't compatible with encoding twice
# Therefore, we can only use this function if the secret already exists
airflow-fernet-key: "TldwdU0zRklTREZ0VDFkamVWUjFaMlozWTFKdWNFNUxTRXRxVm5Oa1p6az0="
airflow-secret-key: "VldWaWQySkhSVUZQZDNWQlltbG1UVzUzVkdwWmVVTkxPR1ZCZWpoQ05tUT0="
---
apiVersion: v1
kind: ConfigMap
metadata:
name: airflow-dependencies
namespace: "default"
data:
requirements.txt: |-
apache-airflow==2.2.3
pytest==6.2.4
python-slugify<5.0
funcy==1.16
apache-airflow-providers-mongo
apache-airflow-providers-postgres
apache-airflow-providers-slack
apache-airflow-providers-amazon
airflow_clickhouse_plugin
apache-airflow-providers-sftp
surveymonkey-python
---
# Source: airflow/templates/rbac/role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: release-name-airflow
namespace: default
labels:
app.kubernetes.io/name: airflow
helm.sh/chart: airflow-12.0.5
app.kubernetes.io/instance: release-name
app.kubernetes.io/managed-by: Helm
rules:
- apiGroups:
- ""
resources:
- "pods"
verbs:
- "create"
- "list"
- "get"
- "watch"
- "delete"
- "patch"
- apiGroups:
- ""
resources:
- "pods/log"
verbs:
- "get"
- apiGroups:
- ""
resources:
- "pods/exec"
verbs:
- "create"
- "get"
---
# Source: airflow/templates/rbac/rolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: release-name-airflow
namespace: default
labels:
app.kubernetes.io/name: airflow
helm.sh/chart: airflow-12.0.5
app.kubernetes.io/instance: release-name
app.kubernetes.io/managed-by: Helm
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: release-name-airflow
subjects:
- kind: ServiceAccount
name: release-name-airflow
namespace: default
---
# Source: airflow/charts/postgresql/templates/primary/svc-headless.yaml
apiVersion: v1
kind: Service
metadata:
name: release-name-postgresql-hl
namespace: default
labels:
app.kubernetes.io/name: postgresql
helm.sh/chart: postgresql-11.0.6
app.kubernetes.io/instance: release-name
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: primary
# Use this annotation in addition to the actual publishNotReadyAddresses
# field below because the annotation will stop being respected soon but the
# field is broken in some versions of Kubernetes:
# https://github.com/kubernetes/kubernetes/issues/58662
service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"
spec:
type: ClusterIP
clusterIP: None
# We want all pods in the StatefulSet to have their addresses published for
# the sake of the other Postgresql pods even before they're ready, since they
# have to be able to talk to each other in order to become ready.
publishNotReadyAddresses: true
ports:
- name: tcp-postgresql
port: 5432
targetPort: tcp-postgresql
selector:
app.kubernetes.io/name: postgresql
app.kubernetes.io/instance: release-name
app.kubernetes.io/component: primary
---
# Source: airflow/charts/postgresql/templates/primary/svc.yaml
apiVersion: v1
kind: Service
metadata:
name: release-name-postgresql
namespace: default
labels:
app.kubernetes.io/name: postgresql
helm.sh/chart: postgresql-11.0.6
app.kubernetes.io/instance: release-name
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: primary
annotations:
spec:
type: ClusterIP
ports:
- name: tcp-postgresql
port: 5432
targetPort: tcp-postgresql
nodePort: null
selector:
app.kubernetes.io/name: postgresql
app.kubernetes.io/instance: release-name
app.kubernetes.io/component: primary
---
# Source: airflow/templates/web/service.yaml
apiVersion: v1
kind: Service
metadata:
name: release-name-airflow
namespace: default
labels:
app.kubernetes.io/name: airflow
helm.sh/chart: airflow-12.0.5
app.kubernetes.io/instance: release-name
app.kubernetes.io/managed-by: Helm
spec:
type: NodePort
ports:
- name: http
port: 8080
nodePort: 30303
selector:
app.kubernetes.io/name: airflow
app.kubernetes.io/instance: release-name
app.kubernetes.io/component: web
---
# Source: airflow/templates/scheduler/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: release-name-airflow-scheduler
namespace: default
labels:
app.kubernetes.io/name: airflow
helm.sh/chart: airflow-12.0.5
app.kubernetes.io/instance: release-name
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: scheduler
spec:
selector:
matchLabels:
app.kubernetes.io/name: airflow
app.kubernetes.io/instance: release-name
app.kubernetes.io/component: scheduler
replicas: 1
strategy:
rollingUpdate: {}
type: RollingUpdate
template:
metadata:
labels:
app.kubernetes.io/name: airflow
helm.sh/chart: airflow-12.0.5
app.kubernetes.io/instance: release-name
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: scheduler
annotations:
checksum/configmap: 01ba4719c80b6fe911b091a7c05124b64eeece964e09c058ef8f9805daca546b
spec:
affinity:
podAffinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- podAffinityTerm:
labelSelector:
matchLabels:
app.kubernetes.io/name: airflow
app.kubernetes.io/instance: release-name
app.kubernetes.io/component: scheduler
namespaces:
- "default"
topologyKey: kubernetes.io/hostname
weight: 1
nodeAffinity:
serviceAccountName: release-name-airflow
securityContext:
fsGroup: 1001
initContainers:
containers:
- name: airflow-scheduler
image: "docker.io/bitnami/airflow-scheduler:2.2.3-debian-10-r57"
imagePullPolicy: "IfNotPresent"
securityContext:
runAsNonRoot: true
runAsUser: 1001
env:
- name: AIRFLOW_FERNET_KEY
valueFrom:
secretKeyRef:
name: release-name-airflow
key: airflow-fernet-key
- name: AIRFLOW_SECRET_KEY
valueFrom:
secretKeyRef:
name: release-name-airflow
key: airflow-secret-key
- name: AIRFLOW_LOAD_EXAMPLES
value: "no"
- name: AIRFLOW_DATABASE_NAME
value: "bitnami_airflow"
- name: AIRFLOW_DATABASE_USERNAME
value: "bn_airflow"
- name: AIRFLOW_DATABASE_PASSWORD
valueFrom:
secretKeyRef:
name: release-name-postgresql
key: password
- name: AIRFLOW_DATABASE_HOST
value: "release-name-postgresql"
- name: AIRFLOW_DATABASE_PORT_NUMBER
value: "5432"
- name: AIRFLOW_EXECUTOR
value: LocalExecutor
- name: AIRFLOW_WEBSERVER_HOST
value: release-name-airflow
- name: AIRFLOW_WEBSERVER_PORT_NUMBER
value: "8080"
- name: AIRFLOW__CORE__DAGS_FOLDER
value: /opt/bitnami/airflow/dags
- name: AIRFLOW__CORE__ENABLE_XCOM_PICKLING
value: "True"
- name: AIRFLOW__CORE__DONOT_PICKLE
value: "False"
resources:
limits: {}
requests: {}
volumeMounts:
- mountPath: /bitnami/python/requirements.txt
name: requirements
subPath: requirements.txt
- mountPath: /opt/bitnami/airflow/dags/src
name: airflow-dags
volumes:
- name: requirements
configMap:
name: airflow-dependencies
- name: airflow-dags
hostPath:
# directory location on host
path: /Users/admin/Desktop/FXC_Airflow/dags/src
# this field is optional
type: Directory
---
# Source: airflow/templates/web/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: release-name-airflow-web
namespace: default
labels:
app.kubernetes.io/name: airflow
helm.sh/chart: airflow-12.0.5
app.kubernetes.io/instance: release-name
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: web
spec:
selector:
matchLabels:
app.kubernetes.io/name: airflow
app.kubernetes.io/instance: release-name
app.kubernetes.io/component: web
replicas: 1
strategy:
rollingUpdate: {}
type: RollingUpdate
template:
metadata:
labels:
app.kubernetes.io/name: airflow
helm.sh/chart: airflow-12.0.5
app.kubernetes.io/instance: release-name
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: web
annotations:
checksum/configmap: 01ba4719c80b6fe911b091a7c05124b64eeece964e09c058ef8f9805daca546b
spec:
affinity:
podAffinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- podAffinityTerm:
labelSelector:
matchLabels:
app.kubernetes.io/name: airflow
app.kubernetes.io/instance: release-name
app.kubernetes.io/component: web
namespaces:
- "default"
topologyKey: kubernetes.io/hostname
weight: 1
nodeAffinity:
serviceAccountName: release-name-airflow
securityContext:
fsGroup: 1001
initContainers:
containers:
- name: airflow-web
image: docker.io/bitnami/airflow:2.2.3-debian-10-r62
imagePullPolicy: "IfNotPresent"
securityContext:
runAsNonRoot: true
runAsUser: 1001
env:
- name: AIRFLOW_FERNET_KEY
valueFrom:
secretKeyRef:
name: release-name-airflow
key: airflow-fernet-key
- name: AIRFLOW_SECRET_KEY
valueFrom:
secretKeyRef:
name: release-name-airflow
key: airflow-secret-key
- name: AIRFLOW_LOAD_EXAMPLES
value: "no"
- name: AIRFLOW_DATABASE_NAME
value: "bitnami_airflow"
- name: AIRFLOW_DATABASE_USERNAME
value: "bn_airflow"
- name: AIRFLOW_DATABASE_PASSWORD
valueFrom:
secretKeyRef:
name: release-name-postgresql
key: password
- name: AIRFLOW_DATABASE_HOST
value: "release-name-postgresql"
- name: AIRFLOW_DATABASE_PORT_NUMBER
value: "5432"
- name: AIRFLOW_EXECUTOR
value: LocalExecutor
- name: AIRFLOW_WEBSERVER_HOST
value: "0.0.0.0"
- name: AIRFLOW_WEBSERVER_PORT_NUMBER
value: "8080"
- name: AIRFLOW_USERNAME
value: airflow
- name: AIRFLOW_PASSWORD
valueFrom:
secretKeyRef:
name: release-name-airflow
key: airflow-password
- name: AIRFLOW_BASE_URL
value: "http://127.0.0.1:8080"
- name: AIRFLOW_LDAP_ENABLE
value: "no"
- name: AIRFLOW__CORE__DAGS_FOLDER
value: /opt/bitnami/airflow/dags
- name: AIRFLOW__CORE__ENABLE_XCOM_PICKLING
value: "True"
- name: AIRFLOW__CORE__DONOT_PICKLE
value: "False"
ports:
- name: http
containerPort: 8080
livenessProbe:
failureThreshold: 6
initialDelaySeconds: 180
periodSeconds: 20
successThreshold: 1
timeoutSeconds: 5
tcpSocket:
port: http
readinessProbe:
failureThreshold: 6
initialDelaySeconds: 30
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 5
tcpSocket:
port: http
resources:
limits:
cpu: "2"
memory: 4Gi
requests: {}
volumeMounts:
- mountPath: /bitnami/python/requirements.txt
name: requirements
subPath: requirements.txt
- mountPath: /opt/bitnami/airflow/dags/src
name: airflow-dags
volumes:
- name: requirements
configMap:
name: airflow-dependencies
- name: airflow-dags
hostPath:
# directory location on host
path: /Users/admin/Desktop/FXC_Airflow/dags/src
# this field is optional
type: Directory
---
# Source: airflow/charts/postgresql/templates/primary/statefulset.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: release-name-postgresql
namespace: default
labels:
app.kubernetes.io/name: postgresql
helm.sh/chart: postgresql-11.0.6
app.kubernetes.io/instance: release-name
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: primary
annotations:
spec:
replicas: 1
serviceName: release-name-postgresql-hl
updateStrategy:
rollingUpdate: {}
type: RollingUpdate
selector:
matchLabels:
app.kubernetes.io/name: postgresql
app.kubernetes.io/instance: release-name
app.kubernetes.io/component: primary
template:
metadata:
name: release-name-postgresql
labels:
app.kubernetes.io/name: postgresql
helm.sh/chart: postgresql-11.0.6
app.kubernetes.io/instance: release-name
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: primary
annotations:
spec:
serviceAccountName: default
affinity:
podAffinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- podAffinityTerm:
labelSelector:
matchLabels:
app.kubernetes.io/name: postgresql
app.kubernetes.io/instance: release-name
app.kubernetes.io/component: primary
namespaces:
- "default"
topologyKey: kubernetes.io/hostname
weight: 1
nodeAffinity:
securityContext:
fsGroup: 1001
initContainers:
containers:
- name: postgresql
image: docker.io/bitnami/postgresql:14.1.0-debian-10-r80
imagePullPolicy: "IfNotPresent"
securityContext:
runAsUser: 1001
env:
- name: BITNAMI_DEBUG
value: "false"
- name: POSTGRESQL_PORT_NUMBER
value: "5432"
- name: POSTGRESQL_VOLUME_DIR
value: "/bitnami/postgresql"
- name: PGDATA
value: "/bitnami/postgresql/data"
# Authentication
- name: POSTGRES_USER
value: "bn_airflow"
- name: POSTGRES_PASSWORD
valueFrom:
secretKeyRef:
name: release-name-postgresql
key: password
- name: POSTGRES_DB
value: "bitnami_airflow"
# Replication
# Initdb
# Standby
# LDAP
- name: POSTGRESQL_ENABLE_LDAP
value: "no"
# TLS
- name: POSTGRESQL_ENABLE_TLS
value: "no"
# Audit
- name: POSTGRESQL_LOG_HOSTNAME
value: "false"
- name: POSTGRESQL_LOG_CONNECTIONS
value: "false"
- name: POSTGRESQL_LOG_DISCONNECTIONS
value: "false"
- name: POSTGRESQL_PGAUDIT_LOG_CATALOG
value: "off"
# Others
- name: POSTGRESQL_CLIENT_MIN_MESSAGES
value: "error"
- name: POSTGRESQL_SHARED_PRELOAD_LIBRARIES
value: "pgaudit"
ports:
- name: tcp-postgresql
containerPort: 5432
livenessProbe:
failureThreshold: 6
initialDelaySeconds: 30
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 5
exec:
command:
- /bin/sh
- -c
- exec pg_isready -U "bn_airflow" -d "dbname=bitnami_airflow" -h 127.0.0.1 -p 5432
readinessProbe:
failureThreshold: 6
initialDelaySeconds: 5
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 5
exec:
command:
- /bin/sh
- -c
- -e
- |
exec pg_isready -U "bn_airflow" -d "dbname=bitnami_airflow" -h 127.0.0.1 -p 5432
[ -f /opt/bitnami/postgresql/tmp/.initialized ] || [ -f /bitnami/postgresql/.initialized ]
resources:
limits: {}
requests:
cpu: 250m
memory: 256Mi
volumeMounts:
- name: dshm
mountPath: /dev/shm
- name: data
mountPath: /bitnami/postgresql
volumes:
- name: dshm
emptyDir:
medium: Memory
volumeClaimTemplates:
- metadata:
name: data
spec:
accessModes:
- "ReadWriteOnce"
resources:
requests:
storage: "8Gi"
The idea is that after a successful deployment I would be able to access the webserver UI through localhost:30303, but I can't for some reason. It feels like there should be a minor change to fix it...
For now, what I've tried is connecting to the webserver pod (kubectl exec -it <webserver pod name> -- /bin/bash) and running two commands: airflow db init and airflow webserver -p 8080.
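One thing to keep in mind with minikube is that a NodePort is exposed on the minikube node's IP, not on the laptop's localhost, so localhost:30303 will usually not answer even when the Service and pods are fine. Two common ways to reach the webserver, using the Service defined above (release-name-airflow, port 8080):
# Option 1: let minikube resolve the NodePort URL for the service.
minikube service release-name-airflow --url -n default
# Option 2: forward the webserver Service to localhost and browse http://localhost:8080.
kubectl port-forward svc/release-name-airflow 8080:8080 -n default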
I'm attempting to use the Statistics Gathering Jenkins plugin to forward metrics to Logstash. The plugin is configured with the following url: http://logstash.monitoring-observability:9000. Both Jenkins and Logstash are deployed on Kubernetes. When I run a build, which triggers metrics forwarding via this plugin, I see the following error in the logs:
2022-02-19 23:29:20.464+0000 [id=263] WARNING o.j.p.s.g.util.RestClientUtil$1#failed: The request for url http://logstash.monitoring-observability:9000/ has failed.
java.net.ConnectException: Connection refused
at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777)
at org.apache.http.impl.nio.reactor.DefaultConnectingIOReactor.processEvent(DefaultConnectingIOReactor.java:173
I get the same behavior when I exec into the jenkins pod and attempt to curl logstash:
jenkins@jenkins-7889fb54b8-d9rvr:/$ curl -vvv logstash.monitoring-observability:9000
* Trying 10.52.9.143:9000...
* connect to 10.52.9.143 port 9000 failed: Connection refused
* Failed to connect to logstash.monitoring-observability port 9000: Connection refused
* Closing connection 0
curl: (7) Failed to connect to logstash.monitoring-observability port 9000: Connection refused
I also get the following error in the logstash logs:
[ERROR] 2022-02-20 00:05:43.450 [[main]<tcp] pipeline - A plugin had an unrecoverable error. Will restart this plugin.
Pipeline_id:main
Plugin: <LogStash::Inputs::Tcp port=>9000, codec=><LogStash::Codecs::JSON id=>"json_f96babad-299c-42ab-98e0-b78c025d9476", enable_metric=>true, charset=>"UTF-8">, host=>"jenkins-server.devops-tools", ssl_verify=>false, id=>"0fddd9afb2fcf12beb75af799a2d771b99af6ac4807f5a67f4ec5e13f008803f", enable_metric=>true, mode=>"server", proxy_protocol=>false, ssl_enable=>false, ssl_key_passphrase=><password>>
Error: Cannot assign requested address
Exception: Java::JavaNet::BindException
Stack: sun.nio.ch.Net.bind0(Native Method)
Here is my jenkins-deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
name: jenkins
namespace: devops-tools
labels:
app: jenkins-server
spec:
replicas: 1
selector:
matchLabels:
app: jenkins-server
template:
metadata:
labels:
app: jenkins-server
spec:
securityContext:
fsGroup: 1000
runAsUser: 1000
serviceAccountName: jenkins-admin
containers:
- name: jenkins
env:
- name: LOGSTASH_HOST
value: logstash
- name: LOGSTASH_PORT
value: "5044"
- name: ELASTICSEARCH_HOST
value: elasticsearch-logging
- name: ELASTICSEARCH_PORT
value: "9200"
- name: ELASTICSEARCH_USERNAME
value: elastic
- name: ELASTICSEARCH_PASSWORD
value: changeme
image: jenkins/jenkins:lts
resources:
limits:
memory: "2Gi"
cpu: "1000m"
requests:
memory: "500Mi"
cpu: "500m"
ports:
- name: httpport
containerPort: 8080
- name: jnlpport
containerPort: 50000
livenessProbe:
httpGet:
path: "/login"
port: 8080
initialDelaySeconds: 90
periodSeconds: 10
timeoutSeconds: 5
failureThreshold: 5
readinessProbe:
httpGet:
path: "/login"
port: 8080
initialDelaySeconds: 60
periodSeconds: 10
timeoutSeconds: 5
failureThreshold: 3
volumeMounts:
- name: jenkins-data
mountPath: /var/jenkins_home
volumes:
- name: jenkins-data
persistentVolumeClaim:
claimName: jenkins-pv-claim
Here is my jenkins-service.yaml
apiVersion: v1
kind: Service
metadata:
name: jenkins-server
namespace: devops-tools
annotations:
prometheus.io/scrape: 'true'
prometheus.io/path: /
prometheus.io/port: '8080'
spec:
selector:
app: jenkins-server
k8s-app: jenkins-server
type: NodePort
ports:
- port: 8080
targetPort: 8080
nodePort: 30000
Here is my logstash-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: logstash-deployment
namespace: monitoring-observability
labels:
app: logstash
spec:
selector:
matchLabels:
app: logstash
replicas: 1
template:
metadata:
labels:
app: logstash
spec:
containers:
- name: logstash
env:
- name: JENKINS_HOST
value: jenkins-server
- name: JENKINS_PORT
value: "8080"
image: docker.elastic.co/logstash/logstash:6.3.0
ports:
- containerPort: 9000
volumeMounts:
- name: config-volume
mountPath: /usr/share/logstash/config
- name: logstash-pipeline-volume
mountPath: /usr/share/logstash/pipeline
volumes:
- name: config-volume
configMap:
name: logstash-configmap
items:
- key: logstash.yml
path: logstash.yml
- name: logstash-pipeline-volume
configMap:
name: logstash-configmap
items:
- key: logstash.conf
path: logstash.conf
Here is my logstash-service.yaml
kind: Service
apiVersion: v1
metadata:
name: logstash
namespace: monitoring-observability
labels:
app: logstash
kubernetes.io/cluster-service: "true"
addonmanager.kubernetes.io/mode: Reconcile
kubernetes.io/name: "logstash"
spec:
selector:
app: logstash
ports:
- protocol: TCP
port: 9000
targetPort: 9000
type: ClusterIP
Here is my logstash configmap:
apiVersion: v1
kind: ConfigMap
metadata:
name: logstash-configmap
namespace: monitoring-observability
data:
logstash.yml: |
http.host: "0.0.0.0"
path.config: /usr/share/logstash/pipeline
logstash.conf: |
input {
tcp {
port => "9000"
codec => "json"
host => "jenkins-server.devops-tools"
ssl_verify => "false"
}
}
filter {
if [message] =~ /^\{.*\}$/ {
json {
source => "message"
}
}
if [ClientHost] {
geoip {
source => "ClientHost"
}
}
}
output {
elasticsearch {
hosts => [ "elasticsearch-logging:9200" ]
}
}
There are no firewalls configured in my cluster that would be blocking traffic on port 9000. I have also tried this same configuration with port 5044 and get the same results. It seems as though my logstash instance is not actually listening on the containerPort. Why might this be?
I resolved this error by updating the configmap to this:
apiVersion: v1
kind: ConfigMap
metadata:
name: logstash-configmap
namespace: monitoring-observability
data:
logstash.yml: |
path.config: /usr/share/logstash/pipeline
logstash.conf: |
input {
tcp {
port => "9000"
codec => "json"
ssl_verify => "false"
}
}
filter {
if [message] =~ /^\{.*\}$/ {
json {
source => "message"
}
}
if [ClientHost] {
geoip {
source => "ClientHost"
}
}
}
output {
elasticsearch {
hosts => [ "elasticsearch-logging:9200" ]
}
}
Note that all references to the jenkins host have been removed.
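The Deployment mounts the pipeline from the ConfigMap and Logstash will not reload it on its own by default, so after updating the ConfigMap the simplest way to make sure the new pipeline is loaded is to restart the pod; the original connectivity check from the Jenkins pod can then be repeated to confirm the tcp input is bound and reachable. A short sketch (substitute the exact pod name if your kubectl does not accept the deploy/ form):
# Restart Logstash so it picks up the edited pipeline.
kubectl rollout restart deployment/logstash-deployment -n monitoring-observability
# Re-run the connectivity test from the Jenkins pod; the connection should now be accepted.
kubectl exec -it -n devops-tools deploy/jenkins -- curl -vvv logstash.monitoring-observability:9000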
I have a Next.js project.
This is my next.config.js file, which I set up by following this guide: https://dev.to/tesh254/environment-variables-from-env-file-in-nextjs-570b
// Required imports for the helpers used below; package names are assumed from the
// usual setup for this guide (next-css/next-sass plugins and dotenv-webpack).
const path = require('path');
const webpack = require('webpack');
const Dotenv = require('dotenv-webpack');
const withCSS = require('@zeit/next-css');
const withSass = require('@zeit/next-sass');

module.exports = withCSS(withSass({
webpack: (config) => {
config.plugins = config.plugins || []
config.module.rules.push({
test: /\.svg$/,
use: ['@svgr/webpack', {
loader: 'url-loader',
options: {
limit: 100000,
name: '[name].[ext]'
}}],
});
config.plugins = [
...config.plugins,
// Read the .env file
new Dotenv({
path: path.join(__dirname, '.env'),
systemvars: true
})
]
const env = Object.keys(process.env).reduce((acc, curr) => {
acc[`process.env.${curr}`] = JSON.stringify(process.env[curr]);
return acc;
}, {});
// Fixes npm packages that depend on `fs` module
config.node = {
fs: 'empty'
}
/** Allows you to create global constants which can be configured
* at compile time, which in our case is our environment variables
*/
config.plugins.push(new webpack.DefinePlugin(env));
return config
}
}),
);
I have a .env file which holds the values I need. It works when run on localhost.
In my Kubernetes environment, within the deploy file (which I can modify), I have the same environment variables set up. But when I try to access them they come back as undefined, so my application cannot run.
I refer to it like:
process.env.SOME_VARIABLE
which works locally.
Does anyone have experience getting environment variables to work in Next.js when deployed? It's not as simple as it is for a backend service. :(
EDIT:
This is what the environment variable section looks like.
EDIT 2:
Full deploy file, edited to remove some details
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
annotations:
deployment.kubernetes.io/revision: "38"
creationTimestamp: xx
generation: 40
labels:
app: appname
name: appname
namespace: development
resourceVersion: xx
selfLink: /apis/extensions/v1beta1/namespaces/development/deployments/appname
uid: xxx
spec:
progressDeadlineSeconds: xx
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
app: appname
tier: sometier
strategy:
rollingUpdate:
maxSurge: 1
maxUnavailable: 1
type: RollingUpdate
template:
metadata:
creationTimestamp: null
labels:
app: appname
tier: sometier
spec:
containers:
- env:
- name: NODE_ENV
value: development
- name: PORT
value: "3000"
- name: SOME_VAR
value: xxx
- name: SOME_VAR
value: xxxx
image: someimage
imagePullPolicy: Always
name: appname
readinessProbe:
failureThreshold: 3
httpGet:
path: /healthz
port: 3000
scheme: HTTP
initialDelaySeconds: 5
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
resources:
requests:
cpu: 100m
memory: 100Mi
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
dnsPolicy: ClusterFirst
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
terminationGracePeriodSeconds: 30
status:
availableReplicas: 1
conditions:
- lastTransitionTime: xxx
lastUpdateTime: xxxx
message: Deployment has minimum availability.
reason: MinimumReplicasAvailable
status: "True"
type: Available
observedGeneration: 40
readyReplicas: 1
replicas: 1
updatedReplicas: 1
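Before changing anything, it can help to separate two failure modes: the variable never reaching the container, versus reaching the container but not being visible to the already-built Next.js bundle (the DefinePlugin in the config above inlines process.env values at build time, so values set only in the Deployment at runtime are not picked up by code that was compiled earlier). A hedged first check that the Deployment's env section works at all:
# Print the environment inside the running container; the variables from the
# Deployment's env section should be listed (names as defined above).
kubectl exec -n development deploy/appname -- printenv | grep -E 'NODE_ENV|PORT|SOME_VAR'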
.env files work in Docker or docker-compose, but they do not work in Kubernetes. If you want to add the variables there, you can do it with ConfigMap objects or add them directly to each Deployment. An example (from the documentation):
apiVersion: v1
kind: Pod
metadata:
name: envar-demo
labels:
purpose: demonstrate-envars
spec:
containers:
- name: envar-demo-container
image: gcr.io/google-samples/node-hello:1.0
env:
- name: DEMO_GREETING
value: "Hello from the environment"
- name: DEMO_FAREWELL
value: "Such a sweet sorrow
Also, the best and most standard way is to use ConfigMaps, for example:
containers:
- env:
- name: DB_DEFAULT_DATABASE
valueFrom:
configMapKeyRef:
key: DB_DEFAULT_DATABASE
name: darwined-env
And the config map:
apiVersion: v1
data:
DB_DEFAULT_DATABASE: darwined_darwin_dev_1
kind: ConfigMap
metadata:
creationTimestamp: null
labels:
io.kompose.service: darwin-env
name: darwined-env
Hope this helps.
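If the values already live in a .env file, the ConfigMap does not have to be written by hand; kubectl can generate it from the file directly. A small sketch, reusing the darwined-env name from the example above (substitute your own ConfigMap name and file path):
# Create the ConfigMap from the existing .env file.
kubectl create configmap darwined-env --from-env-file=.env
# Verify the keys landed as expected.
kubectl get configmap darwined-env -o yaml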