replicaset connections requires - kubernetes

Currently I'm trying to set up the Countly application on Kubernetes.
I have a problem I can't understand; maybe I've been staring at it for too long and have lost some distance.
When my Countly pods (frontend & api) start, I get the error below:
ERROR [db:read] Error reading apps {"name":"find","args":[{}]} MongoError: seed list
contains no mongos proxies, replicaset connections requires the parameter replicaSet to be supplied in the URI or options object, mongodb://server:port/db?replicaSet=name {"name":"MongoError"}
No apps to upgrade
I checked the MongoDB documentation for how to build a replica-set connection string.
At the bottom I've included the output of kubectl get all so you can see the service and pod names.
I've tried a lot of different variations, but every attempt fails.
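Before going further, it may be worth confirming that the replica set has actually been initiated and is named rs0 (a first check on my part, not a confirmed root cause; it assumes the mongo shell that ships in the 3.6 image):
# Should print set name "rs0" and both members; an error about a missing
# replset config means rs.initiate() was never run
kubectl exec -it mongodb-0 -- mongo --eval 'rs.status()'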
Deployment.yaml :
# Source: countly-chart/templates/frontend.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: RELEASE-NAME-frontend
  labels:
    name: RELEASE-NAME-frontend
    app: RELEASE-NAME
    product: dgt
    environment: dev
    version: HEAD
    component: frontend
spec:
  replicas: 1
  revisionHistoryLimit: 1
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  selector:
    matchLabels:
      name: RELEASE-NAME-frontend
      app: RELEASE-NAME
      product: dgt
      environment: dev
      version: HEAD
      component: frontend
  template:
    metadata:
      labels:
        name: RELEASE-NAME-frontend
        app: RELEASE-NAME
        product: dgt
        environment: dev
        version: HEAD
        component: frontend
    spec:
      affinity:
        podAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - mongodb
                - mongodb
            topologyKey: "kubernetes.io/hostname"
      terminationGracePeriodSeconds: 10
      containers:
      - name: countly-frontend
        image: countly/frontend:19.08.1
        resources:
          limits:
            cpu: 100m
            memory: 512Mi
          requests:
            cpu: 100m
            memory: 512Mi
        imagePullPolicy: Always
        ports:
        - containerPort: 6001
        readinessProbe:
          httpGet:
            path: /ping
            port: 6001
          initialDelaySeconds: 60
          periodSeconds: 10
          timeoutSeconds: 3
        env:
        - name: COUNTLY_PLUGINS
          value: "mobile,web,desktop,density,locale,browser,sources,views,enterpriseinfo,logger,systemlogs,errorlogs,populator,reports,crashes,push,star-rating,slipping-away-users,compare,server-stats,dbviewer,assistant,plugin-upload,times-of-day,compliance-hub,video-intelligence-monetization,alerts,onboarding"
        - name: COUNTLY_CONFIG_API_FILESTORAGE
          value: "gridfs"
        - name: COUNTLY_CONFIG_API_MONGODB
          value: "mongodb://mongodb-0.mongodb:27017,mongodb-1.mongodb:27017/countly?replicaSet=rs0"
        - name: COUNTLY_CONFIG_FRONTEND_MONGODB
          value: "mongodb://mongodb-0.mongodb:27017,mongodb-1.mongodb:27017/countly?replicaSet=rs0"
        - name: COUNTLY_CONFIG_HOSTNAME
          value: dgt-iointc-countly.xxx.xx
Service.yaml :
# Source: countly-chart/templates/frontend.yaml
apiVersion: v1
kind: Service
metadata:
  name: RELEASE-NAME-frontend
  labels:
    app: RELEASE-NAME
    product: dgt
    environment: dev
    version: HEAD
    component: frontend
spec:
  type: NodePort
  ports:
  - port: 6001
    targetPort: 6001
    protocol: TCP
  selector:
    app: RELEASE-NAME
    component: frontend
Mongo StatefulSet :
apiVersion: apps/v1
kind: StatefulSet
metadata:
  creationTimestamp: "2020-04-09T15:06:27Z"
  generation: 7
  labels:
    app: mongodb
    component: mongo
    environment: iointc
    product: mongodb
  name: mongodb
  namespace: dgt-iointc
  resourceVersion: "510868664"
  selfLink: /apis/apps/v1/namespaces/dgt-iointc/statefulsets/mongodb
  uid: af9eb929-7a73-11ea-a219-0679473f89f6
spec:
  podManagementPolicy: OrderedReady
  replicas: 2
  revisionHistoryLimit: 1
  selector:
    matchLabels:
      app: mongodb
      component: mongo
  serviceName: mongodb
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: mongodb
        component: mongo
        environment: iointc
        product: mongodb
    spec:
      containers:
      - command:
        - mongod
        - --replSet
        - rs0
        - --bind_ip_all
        env:
        - name: MONGODB_REPLICA_SET_NAME
          value: rs0
        image: docker.xxx.xx/run/mongo:3.6.12
        imagePullPolicy: IfNotPresent
        livenessProbe:
          exec:
            command:
            - mongo
            - --eval
            - db.adminCommand('ping')
          failureThreshold: 3
          initialDelaySeconds: 30
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 5
        name: mongo
        ports:
        - containerPort: 27017
          protocol: TCP
        readinessProbe:
          exec:
            command:
            - mongo
            - --eval
            - db.adminCommand('ping')
          failureThreshold: 3
          initialDelaySeconds: 5
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        resources:
          limits:
            cpu: 500m
            memory: 1000Mi
          requests:
            cpu: 50m
            memory: 1000Mi
        securityContext:
          runAsUser: 999
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /data/db
          name: mongo-storage
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 10
  updateStrategy:
    type: RollingUpdate
  volumeClaimTemplates:
  - metadata:
      creationTimestamp: null
      name: mongo-storage
    spec:
      accessModes:
      - ReadWriteOnce
      dataSource: null
      resources:
        requests:
          storage: 600Mi
      storageClassName: nfs-external
    status:
      phase: Pending
kubectl get all :
NAME READY STATUS RESTARTS AGE
pod/countly-api-7b878d6fc5-ktlmg 0/1 Running 17 46m
pod/countly-frontend-6c4fc86c96-g47hx 0/1 Running 0 39m
pod/mongodb-0 1/1 Running 0 5h19m
pod/mongodb-1 1/1 Running 0 5h19m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/countly-api NodePort 10.80.11.164 <none> 3001:32612/TCP 37h
service/countly-frontend NodePort 10.80.104.62 <none> 6001:31176/TCP 37h
service/mongodb ClusterIP None <none> 27017/TCP 18h
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
deployment.apps/countly-api 1 1 1 0 12h
deployment.apps/countly-frontend 1 1 1 0 12h
NAME DESIRED CURRENT READY AGE
replicaset.apps/countly-api-7b878d6fc5 1 1 0 46m
replicaset.apps/countly-frontend-6c4fc86c96 1 1 0 39m
NAME DESIRED CURRENT AGE
statefulset.apps/mongodb 2 2 18h
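The same URI can also be exercised from inside the cluster with a throwaway client pod (an illustrative check using the stock mongo image; it must run in the same namespace as the mongodb headless service for the mongodb-0.mongodb hostnames to resolve):
# A ping through the exact connection string Countly is given
kubectl run -it --rm mongo-client --image=mongo:3.6 --restart=Never -- \
  mongo "mongodb://mongodb-0.mongodb:27017,mongodb-1.mongodb:27017/countly?replicaSet=rs0" \
  --eval 'db.runCommand({ ping: 1 })'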

Related

Unable to add a K8s service as prometheus target

I want my prometheus server to scrape metrics from a pod.
I followed these steps:
Created a pod using deployment - kubectl apply -f sample-app.deploy.yaml
Exposed the same using kubectl apply -f sample-app.service.yaml
Deployed Prometheus server using helm upgrade -i prometheus prometheus-community/prometheus -f prometheus-values.yaml
created a serviceMonitor using kubectl apply -f service-monitor.yaml to add a target for prometheus.
All pods are running, but when I open prometheus dashboard, I don't see sample-app service as prometheus target, under status>targets in dashboard UI.
I've verified following:
I can see sample-app when I execute kubectl get servicemonitors
I can see sample-app exposes metrics in Prometheus format at /metrics
At this point I debugged further and entered the Prometheus pod using
kubectl exec -it pod/prometheus-server-65b759cb95-dxmkm -c prometheus-server sh
and saw that the Prometheus configuration (/etc/config/prometheus.yml) didn't have sample-app as one of the jobs, so I edited the ConfigMap using
kubectl edit cm prometheus-server -o yaml
and added:
- job_name: sample-app
  static_configs:
  - targets:
    - sample-app:8080
Assuming all other fields such as scrape_interval and scrape_timeout stay at their defaults.
I can see the change reflected in /etc/config/prometheus.yml, but the Prometheus dashboard still doesn't show sample-app as a target under Status > Targets.
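One thing worth ruling out first (my assumption, based on the --web.enable-lifecycle flag visible in the deployment below): the edited file may not have been reloaded yet. A reload can be forced through the lifecycle endpoint, for example:
# busybox wget ships in the official prometheus image; the reload endpoint requires a POST
kubectl -n prom exec pod/prometheus-server-65b759cb95-dxmkm -c prometheus-server -- \
  wget -qO- --post-data='' http://127.0.0.1:9090/-/reload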
Following are the YAMLs for prometheus-server and the ServiceMonitor.
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    autopilot.gke.io/resource-adjustment: '{"input":{"containers":[{"name":"prometheus-server-configmap-reload"},{"name":"prometheus-server"}]},"output":{"containers":[{"limits":{"cpu":"500m","ephemeral-storage":"1Gi","memory":"2Gi"},"requests":{"cpu":"500m","ephemeral-storage":"1Gi","memory":"2Gi"},"name":"prometheus-server-configmap-reload"},{"limits":{"cpu":"500m","ephemeral-storage":"1Gi","memory":"2Gi"},"requests":{"cpu":"500m","ephemeral-storage":"1Gi","memory":"2Gi"},"name":"prometheus-server"}]},"modified":true}'
    deployment.kubernetes.io/revision: "1"
    meta.helm.sh/release-name: prometheus
    meta.helm.sh/release-namespace: prom
  creationTimestamp: "2021-06-24T10:42:31Z"
  generation: 1
  labels:
    app: prometheus
    app.kubernetes.io/managed-by: Helm
    chart: prometheus-14.2.1
    component: server
    heritage: Helm
    release: prometheus
  name: prometheus-server
  namespace: prom
  resourceVersion: "6983855"
  selfLink: /apis/apps/v1/namespaces/prom/deployments/prometheus-server
  uid: <some-uid>
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: prometheus
      component: server
      release: prometheus
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: prometheus
        chart: prometheus-14.2.1
        component: server
        heritage: Helm
        release: prometheus
    spec:
      containers:
      - args:
        - --volume-dir=/etc/config
        - --webhook-url=http://127.0.0.1:9090/-/reload
        image: jimmidyson/configmap-reload:v0.5.0
        imagePullPolicy: IfNotPresent
        name: prometheus-server-configmap-reload
        resources:
          limits:
            cpu: 500m
            ephemeral-storage: 1Gi
            memory: 2Gi
          requests:
            cpu: 500m
            ephemeral-storage: 1Gi
            memory: 2Gi
        securityContext:
          capabilities:
            drop:
            - NET_RAW
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /etc/config
          name: config-volume
          readOnly: true
      - args:
        - --storage.tsdb.retention.time=15d
        - --config.file=/etc/config/prometheus.yml
        - --storage.tsdb.path=/data
        - --web.console.libraries=/etc/prometheus/console_libraries
        - --web.console.templates=/etc/prometheus/consoles
        - --web.enable-lifecycle
        image: quay.io/prometheus/prometheus:v2.26.0
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /-/healthy
            port: 9090
            scheme: HTTP
          initialDelaySeconds: 30
          periodSeconds: 15
          successThreshold: 1
          timeoutSeconds: 10
        name: prometheus-server
        ports:
        - containerPort: 9090
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /-/ready
            port: 9090
            scheme: HTTP
          initialDelaySeconds: 30
          periodSeconds: 5
          successThreshold: 1
          timeoutSeconds: 4
        resources:
          limits:
            cpu: 500m
            ephemeral-storage: 1Gi
            memory: 2Gi
          requests:
            cpu: 500m
            ephemeral-storage: 1Gi
            memory: 2Gi
        securityContext:
          capabilities:
            drop:
            - NET_RAW
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /etc/config
          name: config-volume
        - mountPath: /data
          name: storage-volume
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext:
        fsGroup: 65534
        runAsGroup: 65534
        runAsNonRoot: true
        runAsUser: 65534
        seccompProfile:
          type: RuntimeDefault
      serviceAccount: prometheus-server
      serviceAccountName: prometheus-server
      terminationGracePeriodSeconds: 300
      volumes:
      - configMap:
          defaultMode: 420
          name: prometheus-server
        name: config-volume
      - name: storage-volume
        persistentVolumeClaim:
          claimName: prometheus-server
status:
  availableReplicas: 1
  conditions:
  - lastTransitionTime: "2021-06-24T10:43:25Z"
    lastUpdateTime: "2021-06-24T10:43:25Z"
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  - lastTransitionTime: "2021-06-24T10:42:31Z"
    lastUpdateTime: "2021-06-24T10:43:25Z"
    message: ReplicaSet "prometheus-server-65b759cb95" has successfully progressed.
    reason: NewReplicaSetAvailable
    status: "True"
    type: Progressing
  observedGeneration: 1
  readyReplicas: 1
  replicas: 1
  updatedReplicas: 1
YAML for the ServiceMonitor:
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"monitoring.coreos.com/v1","kind":"ServiceMonitor","metadata":{"annotations":{},"creationTimestamp":"2021-06-24T07:55:58Z","generation":1,"labels":{"app":"sample-app","release":"prometheus"},"name":"sample-app","namespace":"prom","resourceVersion":"6884573","selfLink":"/apis/monitoring.coreos.com/v1/namespaces/prom/servicemonitors/sample-app","uid":"34644b62-eb4f-4ab1-b9df-b22811e40b4c"},"spec":{"endpoints":[{"port":"http"}],"selector":{"matchLabels":{"app":"sample-app","release":"prometheus"}}}}
  creationTimestamp: "2021-06-24T07:55:58Z"
  generation: 2
  labels:
    app: sample-app
    release: prometheus
  name: sample-app
  namespace: prom
  resourceVersion: "6904642"
  selfLink: /apis/monitoring.coreos.com/v1/namespaces/prom/servicemonitors/sample-app
  uid: <some-uid>
spec:
  endpoints:
  - port: http
  selector:
    matchLabels:
      app: sample-app
      release: prometheus
You need to use the prometheus-community/kube-prometheus-stack chart, which includes the Prometheus operator, in order to have Prometheus' configuration update automatically based on ServiceMonitor resources.
The prometheus-community/prometheus chart you used does not include the Prometheus operator that watches for ServiceMonitor resources in the Kubernetes API and updates the Prometheus server's ConfigMap accordingly.
It seems that you have the necessary CustomResourceDefinitions (CRDs) installed in your cluster, otherwise you would not have been able to create a ServiceMonitor resource. These are not included in the prometheus-community/prometheus chart so perhaps they were added to your cluster previously.
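A minimal migration sketch (release and namespace names carried over from the question; adjust values as needed):
# Replace the operator-less chart with the operator-based stack
helm -n prom uninstall prometheus
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
helm upgrade -i prometheus prometheus-community/kube-prometheus-stack -n prom
Note that by default the stack's Prometheus only selects ServiceMonitors whose release label matches the Helm release; the sample-app ServiceMonitor above already carries release: prometheus, so it should be picked up.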

HPA showing unknown in k8s

I configured an HPA using the command shown below:
kubectl autoscale deployment isamruntime-v1 --cpu-percent=20 --min=1 --max=3 --namespace=default
horizontalpodautoscaler.autoscaling/isamruntime-v1 autoscaled
However, the HPA cannot identify the CPU load.
pranam#UNKNOWN kubernetes % kubectl get hpa
NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
isamruntime-v1 Deployment/isamruntime-v1 <unknown>/20% 1 3 0 3s
I read a number of articles that suggested installing the metrics server, so I did that.
pranam#UNKNOWN kubernetes % kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.3.6/components.yaml
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator configured
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader configured
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io configured
serviceaccount/metrics-server configured
deployment.apps/metrics-server configured
service/metrics-server configured
clusterrole.rbac.authorization.k8s.io/system:metrics-server configured
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server configured
I can see the metrics server:
pranam#UNKNOWN kubernetes % kubectl get pods -o wide --namespace=kube-system
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
calico-kube-controllers-7d88b45844-lz8zw 1/1 Running 0 22d 10.164.27.28 10.164.27.28 <none> <none>
calico-node-bsx6p 1/1 Running 0 8d 10.164.27.39 10.164.27.39 <none> <none>
calico-node-g229m 1/1 Running 0 8d 10.164.27.46 10.164.27.46 <none> <none>
calico-node-slwrh 1/1 Running 0 22d 10.164.27.28 10.164.27.28 <none> <none>
calico-node-tztjg 1/1 Running 0 8d 10.164.27.44 10.164.27.44 <none> <none>
coredns-7d6bb98ccc-d8nrs 1/1 Running 0 25d 172.30.93.205 10.164.27.28 <none> <none>
coredns-7d6bb98ccc-n28dm 1/1 Running 0 25d 172.30.93.204 10.164.27.28 <none> <none>
coredns-7d6bb98ccc-zx5jx 1/1 Running 0 25d 172.30.93.197 10.164.27.28 <none> <none>
coredns-autoscaler-848db65fc6-lnfvf 1/1 Running 0 25d 172.30.93.201 10.164.27.28 <none> <none>
dashboard-metrics-scraper-576c46d9bd-k6z85 1/1 Running 0 25d 172.30.93.195 10.164.27.28 <none> <none>
ibm-file-plugin-7c57965855-494bz 1/1 Running 0 22d 172.30.93.216 10.164.27.28 <none> <none>
ibm-iks-cluster-autoscaler-7df84fb95c-fhtgv 1/1 Running 0 2d23h 172.30.137.98 10.164.27.46 <none> <none>
ibm-keepalived-watcher-9w4gb 1/1 Running 0 8d 10.164.27.39 10.164.27.39 <none> <none>
ibm-keepalived-watcher-ps5zm 1/1 Running 0 8d 10.164.27.46 10.164.27.46 <none> <none>
ibm-keepalived-watcher-rzxbs 1/1 Running 0 8d 10.164.27.44 10.164.27.44 <none> <none>
ibm-keepalived-watcher-w6mxb 1/1 Running 0 25d 10.164.27.28 10.164.27.28 <none> <none>
ibm-master-proxy-static-10.164.27.28 2/2 Running 0 25d 10.164.27.28 10.164.27.28 <none> <none>
ibm-master-proxy-static-10.164.27.39 2/2 Running 0 8d 10.164.27.39 10.164.27.39 <none> <none>
ibm-master-proxy-static-10.164.27.44 2/2 Running 0 8d 10.164.27.44 10.164.27.44 <none> <none>
ibm-master-proxy-static-10.164.27.46 2/2 Running 0 8d 10.164.27.46 10.164.27.46 <none> <none>
ibm-storage-watcher-67466b969f-ps55m 1/1 Running 0 22d 172.30.93.217 10.164.27.28 <none> <none>
kubernetes-dashboard-c6b4b9d77-27zwb 1/1 Running 2 22d 172.30.93.218 10.164.27.28 <none> <none>
metrics-server-79d847cf58-6frsf 2/2 Running 0 3m23s 172.30.93.226 10.164.27.28 <none> <none>
public-crbro6um6l04jalpqrsl5g-alb1-8465f75bb4-88vl5 4/4 Running 0 11h 172.30.93.225 10.164.27.28 <none> <none>
public-crbro6um6l04jalpqrsl5g-alb1-8465f75bb4-vx68d 4/4 Running 0 11h 172.30.137.104 10.164.27.46 <none> <none>
vpn-58b48cdc7c-4lp9c 1/1 Running 0 25d 172.30.93.193 10.164.27.28 <none> <none>
I am using Istio and Sysdig; I'm not sure whether that breaks anything. My k8s versions are shown below.
pranam#UNKNOWN kubernetes % kubectl version
Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.7", GitCommit:"be3d344ed06bff7a4fc60656200a93c74f31f9a4", GitTreeState:"clean", BuildDate:"2020-02-11T19:34:02Z", GoVersion:"go1.13.6", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.7+IKS", GitCommit:"3305158dfe9ee1f89f596ef260135dcba881848c", GitTreeState:"clean", BuildDate:"2020-06-17T18:32:22Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
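A quick end-to-end check of the metrics pipeline (my own suggested commands, not part of the original steps):
# The resource-metrics APIService should show Available=True
kubectl get apiservice v1beta1.metrics.k8s.io
# These should return live CPU/memory figures once metrics-server is scraping
kubectl top nodes
kubectl top pods --namespace=default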
My YAML file is:
#Assumes create-docker-store-secret.sh used to create dockerlogin secret
#Assumes create-secrets.sh used to create key file, sam admin, and cfgsvc secrets
apiVersion: storage.k8s.io/v1beta1
# Create StorageClass with gidallocate=true to allow non-root user access to mount
# This is used by PostgreSQL container
kind: StorageClass
metadata:
  name: ibmc-file-bronze-gid
  labels:
    kubernetes.io/cluster-service: "true"
provisioner: ibm.io/ibmc-file
parameters:
  type: "Endurance"
  iopsPerGB: "2"
  sizeRange: "[1-12000]Gi"
  mountOptions: nfsvers=4.1,hard
  billingType: "hourly"
  reclaimPolicy: "Delete"
  classVersion: "2"
  gidAllocate: "true"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ldaplib
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 50M
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ldapslapd
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 50M
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ldapsecauthority
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 50M
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgresqldata
spec:
  storageClassName: ibmc-file-bronze-gid
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 50M
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: isamconfig
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 50M
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: openldap
  labels:
    app: openldap
spec:
  selector:
    matchLabels:
      app: openldap
  replicas: 1
  template:
    metadata:
      labels:
        app: openldap
    spec:
      volumes:
      - name: ldaplib
        persistentVolumeClaim:
          claimName: ldaplib
      - name: ldapslapd
        persistentVolumeClaim:
          claimName: ldapslapd
      - name: ldapsecauthority
        persistentVolumeClaim:
          claimName: ldapsecauthority
      - name: openldap-keys
        secret:
          secretName: openldap-keys
      containers:
      - name: openldap
        image: ibmcom/isam-openldap:9.0.7.0
        ports:
        - containerPort: 636
        env:
        - name: LDAP_DOMAIN
          value: ibm.com
        - name: LDAP_ADMIN_PASSWORD
          value: Passw0rd
        - name: LDAP_CONFIG_PASSWORD
          value: Passw0rd
        volumeMounts:
        - mountPath: /var/lib/ldap
          name: ldaplib
        - mountPath: /etc/ldap/slapd.d
          name: ldapslapd
        - mountPath: /var/lib/ldap.secAuthority
          name: ldapsecauthority
        - mountPath: /container/service/slapd/assets/certs
          name: openldap-keys
        # This line is needed when running on Kubernetes 1.9.4 or above
        args: [ "--copy-service"]
        # useful for debugging startup issues - can run bash, then exec to the container and poke around
        # command: [ "/bin/bash"]
        # args: [ "-c", "while /bin/true ; do sleep 5; done" ]
        # Just this line to get debug output from openldap startup
        # args: [ "--loglevel" , "trace","--copy-service"]
---
# for external service access, see https://console.bluemix.net/docs/containers/cs_apps.html#cs_apps_public_nodeport
apiVersion: v1
kind: Service
metadata:
  name: openldap
  labels:
    app: openldap
spec:
  ports:
  - port: 636
    name: ldaps
    protocol: TCP
  selector:
    app: openldap
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgresql
  labels:
    app: postgresql
spec:
  selector:
    matchLabels:
      app: postgresql
  replicas: 1
  template:
    metadata:
      labels:
        app: postgresql
    spec:
      securityContext:
        runAsNonRoot: true
        runAsUser: 70
        fsGroup: 0
      volumes:
      - name: postgresqldata
        persistentVolumeClaim:
          claimName: postgresqldata
      - name: postgresql-keys
        secret:
          secretName: postgresql-keys
      containers:
      - name: postgresql
        image: ibmcom/isam-postgresql:9.0.7.0
        ports:
        - containerPort: 5432
        env:
        - name: POSTGRES_USER
          value: postgres
        - name: POSTGRES_PASSWORD
          value: Passw0rd
        - name: POSTGRES_DB
          value: isam
        - name: POSTGRES_SSL_KEYDB
          value: /var/local/server.pem
        - name: PGDATA
          value: /var/lib/postgresql/data/db-files/
        volumeMounts:
        - mountPath: /var/lib/postgresql/data
          name: postgresqldata
        - mountPath: /var/local
          name: postgresql-keys
        # useful for debugging startup issues - can run bash, then exec to the container and poke around
        # command: [ "/bin/bash"]
        # args: [ "-c", "while /bin/true ; do sleep 5; done" ]
---
# for external service access, see https://console.bluemix.net/docs/containers/cs_apps.html#cs_apps_public_nodeport
apiVersion: v1
kind: Service
metadata:
  name: postgresql
spec:
  ports:
  - port: 5432
    name: postgresql
    protocol: TCP
  selector:
    app: postgresql
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: isamconfig
  labels:
    app: isamconfig
spec:
  selector:
    matchLabels:
      app: isamconfig
  replicas: 1
  template:
    metadata:
      labels:
        app: isamconfig
    spec:
      securityContext:
        runAsNonRoot: true
        runAsUser: 6000
      volumes:
      - name: isamconfig
        persistentVolumeClaim:
          claimName: isamconfig
      - name: isamconfig-logs
        emptyDir: {}
      containers:
      - name: isamconfig
        image: ibmcom/isam:9.0.7.1_IF4
        volumeMounts:
        - mountPath: /var/shared
          name: isamconfig
        - mountPath: /var/application.logs
          name: isamconfig-logs
        env:
        - name: SERVICE
          value: config
        - name: CONTAINER_TIMEZONE
          value: Europe/London
        - name: ADMIN_PWD
          valueFrom:
            secretKeyRef:
              name: samadmin
              key: adminpw
        readinessProbe:
          tcpSocket:
            port: 9443
          initialDelaySeconds: 5
          periodSeconds: 10
        livenessProbe:
          tcpSocket:
            port: 9443
          initialDelaySeconds: 120
          periodSeconds: 20
        # command: [ "/sbin/bootstrap.sh" ]
      imagePullSecrets:
      - name: dockerlogin
---
# for external service access, see https://console.bluemix.net/docs/containers/cs_apps.html#cs_apps_public_nodeport
apiVersion: v1
kind: Service
metadata:
  name: isamconfig
spec:
  # To make the LMI internet facing, make it a NodePort
  type: NodePort
  ports:
  - port: 9443
    name: isamconfig
    protocol: TCP
    # make this one statically allocated
    nodePort: 30442
  selector:
    app: isamconfig
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: isamwrprp1-v1
  labels:
    app: isamwrprp1
spec:
  selector:
    matchLabels:
      app: isamwrprp1
      version: v1
  replicas: 1
  template:
    metadata:
      labels:
        app: isamwrprp1
        version: v1
    spec:
      securityContext:
        runAsNonRoot: true
        runAsUser: 6000
      volumes:
      - name: isamconfig
        emptyDir: {}
      - name: isamwrprp1-logs
        emptyDir: {}
      containers:
      - name: isamwrprp1
        image: ibmcom/isam:9.0.7.1_IF4
        ports:
        - containerPort: 443
        volumeMounts:
        - mountPath: /var/shared
          name: isamconfig
        - mountPath: /var/application.logs
          name: isamwrprp1-logs
        env:
        - name: SERVICE
          value: webseal
        - name: INSTANCE
          value: rp1
        - name: CONTAINER_TIMEZONE
          value: Europe/London
        - name: AUTO_RELOAD_FREQUENCY
          value: "5"
        - name: CONFIG_SERVICE_URL
          value: https://isamconfig:9443/shared_volume
        - name: CONFIG_SERVICE_USER_NAME
          value: cfgsvc
        - name: CONFIG_SERVICE_USER_PWD
          valueFrom:
            secretKeyRef:
              name: configreader
              key: cfgsvcpw
        livenessProbe:
          exec:
            command:
            - /sbin/health_check.sh
            - livenessProbe
          initialDelaySeconds: 10
          periodSeconds: 10
          timeoutSeconds: 2
        readinessProbe:
          exec:
            command:
            - /sbin/health_check.sh
          initialDelaySeconds: 10
          periodSeconds: 10
          timeoutSeconds: 2
      imagePullSecrets:
      - name: dockerlogin
---
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: isamwrprp1-v2
  labels:
    app: isamwrprp1
spec:
  selector:
    matchLabels:
      app: isamwrprp1
      version: v2
  replicas: 1
  template:
    metadata:
      labels:
        app: isamwrprp1
        version: v2
    spec:
      securityContext:
        runAsNonRoot: true
        runAsUser: 6000
      volumes:
      - name: isamconfig
        emptyDir: {}
      - name: isamwrprp1-logs
        emptyDir: {}
      containers:
      - name: isamwrprp1
        image: ibmcom/isam:9.0.7.1_IF4
        ports:
        - containerPort: 443
        volumeMounts:
        - mountPath: /var/shared
          name: isamconfig
        - mountPath: /var/application.logs
          name: isamwrprp1-logs
        env:
        - name: SERVICE
          value: webseal
        - name: INSTANCE
          value: rp1
        - name: CONTAINER_TIMEZONE
          value: Europe/London
        - name: AUTO_RELOAD_FREQUENCY
          value: "5"
        - name: CONFIG_SERVICE_URL
          value: https://isamconfig:9443/shared_volume
        - name: CONFIG_SERVICE_USER_NAME
          value: cfgsvc
        - name: CONFIG_SERVICE_USER_PWD
          valueFrom:
            secretKeyRef:
              name: configreader
              key: cfgsvcpw
        livenessProbe:
          exec:
            command:
            - /sbin/health_check.sh
            - livenessProbe
          initialDelaySeconds: 10
          periodSeconds: 10
          timeoutSeconds: 2
        readinessProbe:
          exec:
            command:
            - /sbin/health_check.sh
          initialDelaySeconds: 10
          periodSeconds: 10
          timeoutSeconds: 2
      imagePullSecrets:
      - name: dockerlogin
---
# for external service access, see https://console.bluemix.net/docs/containers/cs_apps.html#cs_apps_public_nodeport
apiVersion: v1
kind: Service
metadata:
  name: isamwrprp1
spec:
  type: NodePort
  sessionAffinity: ClientIP
  ports:
  - port: 443
    name: isamwrprp1
    protocol: TCP
    nodePort: 30443
  selector:
    app: isamwrprp1
---
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: isamwrpmobile
  labels:
    app: isamwrpmobile
spec:
  selector:
    matchLabels:
      app: isamwrpmobile
  replicas: 1
  template:
    metadata:
      labels:
        app: isamwrpmobile
    spec:
      securityContext:
        runAsNonRoot: true
        runAsUser: 6000
      volumes:
      - name: isamconfig
        emptyDir: {}
      - name: isamwrpmobile-logs
        emptyDir: {}
      containers:
      - name: isamwrpmobile
        image: ibmcom/isam:9.0.7.1_IF4
        ports:
        - containerPort: 443
        volumeMounts:
        - mountPath: /var/shared
          name: isamconfig
        - mountPath: /var/application.logs
          name: isamwrpmobile-logs
        env:
        - name: SERVICE
          value: webseal
        - name: INSTANCE
          value: mobile
        - name: CONTAINER_TIMEZONE
          value: Europe/London
        - name: AUTO_RELOAD_FREQUENCY
          value: "5"
        - name: CONFIG_SERVICE_URL
          value: https://isamconfig:9443/shared_volume
        - name: CONFIG_SERVICE_USER_NAME
          value: cfgsvc
        - name: CONFIG_SERVICE_USER_PWD
          valueFrom:
            secretKeyRef:
              name: configreader
              key: cfgsvcpw
        livenessProbe:
          exec:
            command:
            - /sbin/health_check.sh
            - livenessProbe
          initialDelaySeconds: 10
          periodSeconds: 10
          timeoutSeconds: 2
        readinessProbe:
          exec:
            command:
            - /sbin/health_check.sh
          initialDelaySeconds: 10
          periodSeconds: 10
          timeoutSeconds: 2
      imagePullSecrets:
      - name: dockerlogin
---
# for external service access, see https://console.bluemix.net/docs/containers/cs_apps.html#cs_apps_public_nodeport
apiVersion: v1
kind: Service
metadata:
  name: isamwrpmobile
spec:
  type: NodePort
  sessionAffinity: ClientIP
  ports:
  - port: 443
    name: isamwrpmobile
    protocol: TCP
    nodePort: 30444
  selector:
    app: isamwrpmobile
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: isamruntime-v1
  labels:
    app: isamruntime
spec:
  selector:
    matchLabels:
      app: isamruntime
      version: v1
  replicas: 1
  template:
    metadata:
      labels:
        app: isamruntime
        version: v1
    spec:
      securityContext:
        runAsNonRoot: true
        runAsUser: 6000
      volumes:
      - name: isamconfig
        emptyDir: {}
      - name: isamruntime-logs
        emptyDir: {}
      containers:
      - name: isamruntime
        image: ibmcom/isam:9.0.7.1_IF4
        ports:
        - containerPort: 443
        volumeMounts:
        - mountPath: /var/shared
          name: isamconfig
        - mountPath: /var/application.logs
          name: isamruntime-logs
        env:
        - name: SERVICE
          value: runtime
        - name: CONTAINER_TIMEZONE
          value: Europe/London
        - name: AUTO_RELOAD_FREQUENCY
          value: "5"
        - name: CONFIG_SERVICE_URL
          value: https://isamconfig:9443/shared_volume
        - name: CONFIG_SERVICE_USER_NAME
          value: cfgsvc
        - name: CONFIG_SERVICE_USER_PWD
          valueFrom:
            secretKeyRef:
              name: configreader
              key: cfgsvcpw
        livenessProbe:
          exec:
            command:
            - /sbin/health_check.sh
            - livenessProbe
          initialDelaySeconds: 10
          periodSeconds: 10
          timeoutSeconds: 2
        readinessProbe:
          exec:
            command:
            - /sbin/health_check.sh
          initialDelaySeconds: 10
          periodSeconds: 10
          timeoutSeconds: 2
      imagePullSecrets:
      - name: dockerlogin
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: isamruntime-v2
  labels:
    app: isamruntime
spec:
  selector:
    matchLabels:
      app: isamruntime
      version: v2
  replicas: 1
  template:
    metadata:
      labels:
        app: isamruntime
        version: v2
    spec:
      securityContext:
        runAsNonRoot: true
        runAsUser: 6000
      volumes:
      - name: isamconfig
        emptyDir: {}
      - name: isamruntime-logs
        emptyDir: {}
      containers:
      - name: isamruntime
        image: ibmcom/isam:9.0.7.1_IF4
        ports:
        - containerPort: 443
        volumeMounts:
        - mountPath: /var/shared
          name: isamconfig
        - mountPath: /var/application.logs
          name: isamruntime-logs
        env:
        - name: SERVICE
          value: runtime
        - name: CONTAINER_TIMEZONE
          value: Europe/London
        - name: AUTO_RELOAD_FREQUENCY
          value: "5"
        - name: CONFIG_SERVICE_URL
          value: https://isamconfig:9443/shared_volume
        - name: CONFIG_SERVICE_USER_NAME
          value: cfgsvc
        - name: CONFIG_SERVICE_USER_PWD
          valueFrom:
            secretKeyRef:
              name: configreader
              key: cfgsvcpw
        livenessProbe:
          exec:
            command:
            - /sbin/health_check.sh
            - livenessProbe
          initialDelaySeconds: 10
          periodSeconds: 10
          timeoutSeconds: 2
        readinessProbe:
          exec:
            command:
            - /sbin/health_check.sh
          initialDelaySeconds: 10
          periodSeconds: 10
          timeoutSeconds: 2
      imagePullSecrets:
      - name: dockerlogin
---
apiVersion: v1
kind: Service
metadata:
  name: isamruntime
spec:
  ports:
  - port: 443
    name: isamruntime
    protocol: TCP
  selector:
    app: isamruntime
---
I am not sure why the CPU load is shown as unknown. Have I missed a step or made a mistake? Can someone help?
Regards,
Pranam
Based on the issue shown, it appears that you have not set the resource requests and limits in the deployment.yaml file.
If you run kubectl explain deployment, you will see the following in the container spec:
resources:
  limits:
    cpu:
    memory:
  requests:
    cpu:
    memory:
If you add values for these keys, the HPA issue should be resolved.
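For instance, a sketch for the isamruntime container (the numbers are illustrative, not tuned values; the CPU request is the denominator the HPA uses for its utilization percentage):
containers:
- name: isamruntime
  image: ibmcom/isam:9.0.7.1_IF4
  resources:
    requests:
      cpu: 100m      # --cpu-percent=20 is measured against this request
      memory: 512Mi
    limits:
      cpu: 500m
      memory: 1Gi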
Did you specify the resources block when you defined your app's deployment?
I don't remember where it was stated, but I encountered this case once when I forgot it.
More information about managing resources for containers: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/

How to make Traefik bind the host server's ports 80 and 443 when using the Deployment type

I am using Traefik 2.2.1 as my cluster's entrypoint, deployed as a Deployment. This is my deployment config:
kind: Deployment
apiVersion: apps/v1
metadata:
  name: traefik
  namespace: kube-system
  selfLink: /apis/apps/v1/namespaces/kube-system/deployments/traefik
  uid: ddee327d-8570-44be-ab8d-06cb440187f4
  resourceVersion: '335024'
  generation: 12
  creationTimestamp: '2020-06-04T07:37:20Z'
  labels:
    app.kubernetes.io/instance: traefik
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: traefik
    helm.sh/chart: traefik-8.2.1
  annotations:
    deployment.kubernetes.io/revision: '7'
    meta.helm.sh/release-name: traefik
    meta.helm.sh/release-namespace: kube-system
spec:
  replicas: 4
  selector:
    matchLabels:
      app.kubernetes.io/instance: traefik
      app.kubernetes.io/name: traefik
  template:
    metadata:
      creationTimestamp: null
      labels:
        app.kubernetes.io/instance: traefik
        app.kubernetes.io/managed-by: Helm
        app.kubernetes.io/name: traefik
        helm.sh/chart: traefik-8.2.1
    spec:
      volumes:
      - name: data
        emptyDir: {}
      containers:
      - name: traefik
        image: 'traefik:2.2.1'
        args:
        - '--global.checknewversion'
        - '--global.sendanonymoususage'
        - '--entryPoints.traefik.address=:9000'
        - '--entryPoints.web.address=:80'
        - '--entryPoints.websecure.address=:443'
        - '--api.dashboard=true'
        - '--ping=true'
        - '--providers.kubernetescrd'
        - '--providers.kubernetesingress'
        ports:
        - name: traefik
          containerPort: 9000
          protocol: TCP
        - name: web
          containerPort: 8000
          protocol: TCP
        - name: websecure
          containerPort: 8443
          protocol: TCP
        resources: {}
        volumeMounts:
        - name: data
          mountPath: /data
        livenessProbe:
          httpGet:
            path: /ping
            port: 9000
            scheme: HTTP
          initialDelaySeconds: 10
          timeoutSeconds: 2
          periodSeconds: 10
          successThreshold: 1
          failureThreshold: 3
        readinessProbe:
          httpGet:
            path: /ping
            port: 9000
            scheme: HTTP
          initialDelaySeconds: 10
          timeoutSeconds: 2
          periodSeconds: 10
          successThreshold: 1
          failureThreshold: 1
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        imagePullPolicy: IfNotPresent
        securityContext:
          capabilities:
            drop:
            - ALL
          runAsUser: 65532
          runAsGroup: 65532
          runAsNonRoot: true
          readOnlyRootFilesystem: true
      restartPolicy: Always
      terminationGracePeriodSeconds: 60
      dnsPolicy: ClusterFirst
      serviceAccountName: traefik
      serviceAccount: traefik
      securityContext:
        fsGroup: 65532
      schedulerName: default-scheduler
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  revisionHistoryLimit: 10
  progressDeadlineSeconds: 600
status:
  observedGeneration: 12
  replicas: 5
  updatedReplicas: 2
  readyReplicas: 3
  availableReplicas: 3
  unavailableReplicas: 2
  conditions:
  - type: Available
    status: 'True'
    lastUpdateTime: '2020-06-04T08:41:03Z'
    lastTransitionTime: '2020-06-04T08:41:03Z'
    reason: MinimumReplicasAvailable
    message: Deployment has minimum availability.
  - type: Progressing
    status: 'True'
    lastUpdateTime: '2020-06-04T10:57:35Z'
    lastTransitionTime: '2020-06-04T10:48:40Z'
    reason: ReplicaSetUpdated
    message: ReplicaSet "traefik-dd74b59b" is progressing.
My question is: is it possible to make Traefik listen on the host's ports 80 and 443? If so, how? Or should I change my Deployment to a DaemonSet? Otherwise I would have to deploy an nginx on every node to forward traffic.
Add hostNetwork: true to the pod spec. This makes the pod use the host's network namespace.
...
spec:
  hostNetwork: true
  containers:
  - name: traefik
...
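Two caveats, hedged: binding ports below 1024 as UID 65532 with all capabilities dropped will generally fail, so with hostNetwork the NET_BIND_SERVICE capability likely has to be added back; and a DaemonSet makes more sense than a Deployment here so that every node listens. An alternative sketch that avoids host networking entirely is hostPort on the existing container ports (this assumes the entrypoints listen on 8000/8443, matching the declared containerPorts):
ports:
- name: web
  containerPort: 8000
  hostPort: 80       # node port 80 forwarded to the web entrypoint
  protocol: TCP
- name: websecure
  containerPort: 8443
  hostPort: 443      # node port 443 forwarded to the websecure entrypoint
  protocol: TCP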

Calico: networkPlugin cni failed to set up pod, i/o timeout

I have an issue deploying some pods on my k8s node. The error is the following:
Failed create pod sandbox: rpc error: code = Unknown desc = failed to
set up sandbox container
"7da8bce09dd6820a65754073b1b4e52e640291dcb82f1da87ae99570c6964d1b"
network for pod "webservices-8675d4667d-7mdf9": networkPlugin cni
failed to set up pod "webservices-8675d4667d-7mdf9_default" network:
Get https://[10.233.0.1]:443/api/v1/namespaces/default: dial tcp
10.233.0.1:443: i/o timeout
However, some pods are deployed, for example kubernetes-dashboard.
Update:
NAME STATUS ROLES AGE VERSION LABELS
k8s-master.mariyo.eu Ready master 3d15h v1.16.6 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-master.mariyo.eu,kubernetes.io/os=linux,node-role.kubernetes.io/master=
k8s-node-1.mariyo.eu Ready <none> 3d15h v1.16.6 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-node-1.mariyo.eu,kubernetes.io/os=linux
Deployment for coredns:
kind: Deployment
apiVersion: apps/v1
metadata:
  name: coredns
  namespace: kube-system
  selfLink: /apis/apps/v1/namespaces/kube-system/deployments/coredns
  uid: bd5451ec-2a33-443d-8519-ffcec935ac0c
  resourceVersion: '397508'
  generation: 2
  creationTimestamp: '2020-01-24T16:14:37Z'
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
    k8s-app: kube-dns
    kubernetes.io/cluster-service: 'true'
    kubernetes.io/name: coredns
  annotations:
    deployment.kubernetes.io/revision: '1'
    kubectl.kubernetes.io/last-applied-configuration: >
      {"apiVersion":"apps/v1","kind":"Deployment","metadata":{"annotations":{},"labels":{"addonmanager.kubernetes.io/mode":"Reconcile","k8s-app":"kube-dns","kubernetes.io/cluster-service":"true","kubernetes.io/name":"coredns"},"name":"coredns","namespace":"kube-system"},"spec":{"selector":{"matchLabels":{"k8s-app":"kube-dns"}},"strategy":{"rollingUpdate":{"maxSurge":"10%","maxUnavailable":0},"type":"RollingUpdate"},"template":{"metadata":{"annotations":{"seccomp.security.alpha.kubernetes.io/pod":"docker/default"},"labels":{"k8s-app":"kube-dns"}},"spec":{"affinity":{"nodeAffinity":{"preferredDuringSchedulingIgnoredDuringExecution":[{"preference":{"matchExpressions":[{"key":"node-role.kubernetes.io/master","operator":"In","values":[""]}]},"weight":100}]},"podAntiAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":[{"labelSelector":{"matchLabels":{"k8s-app":"kube-dns"}},"topologyKey":"kubernetes.io/hostname"}]}},"containers":[{"args":["-conf","/etc/coredns/Corefile"],"image":"docker.io/coredns/coredns:1.6.0","imagePullPolicy":"IfNotPresent","livenessProbe":{"failureThreshold":10,"httpGet":{"path":"/health","port":8080,"scheme":"HTTP"},"successThreshold":1,"timeoutSeconds":5},"name":"coredns","ports":[{"containerPort":53,"name":"dns","protocol":"UDP"},{"containerPort":53,"name":"dns-tcp","protocol":"TCP"},{"containerPort":9153,"name":"metrics","protocol":"TCP"}],"readinessProbe":{"failureThreshold":10,"httpGet":{"path":"/ready","port":8181,"scheme":"HTTP"},"successThreshold":1,"timeoutSeconds":5},"resources":{"limits":{"memory":"170Mi"},"requests":{"cpu":"100m","memory":"70Mi"}},"securityContext":{"allowPrivilegeEscalation":false,"capabilities":{"add":["NET_BIND_SERVICE"],"drop":["all"]},"readOnlyRootFilesystem":true},"volumeMounts":[{"mountPath":"/etc/coredns","name":"config-volume"}]}],"dnsPolicy":"Default","nodeSelector":{"beta.kubernetes.io/os":"linux"},"priorityClassName":"system-cluster-critical","serviceAccountName":"coredns","tolerations":[{"effect":"NoSchedule","key":"node-role.kubernetes.io/master"},{"key":"CriticalAddonsOnly","operator":"Exists"}],"volumes":[{"configMap":{"items":[{"key":"Corefile","path":"Corefile"}],"name":"coredns"},"name":"config-volume"}]}}}}
spec:
  replicas: 2
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      creationTimestamp: null
      labels:
        k8s-app: kube-dns
      annotations:
        seccomp.security.alpha.kubernetes.io/pod: docker/default
    spec:
      volumes:
      - name: config-volume
        configMap:
          name: coredns
          items:
          - key: Corefile
            path: Corefile
          defaultMode: 420
      containers:
      - name: coredns
        image: 'docker.io/coredns/coredns:1.6.0'
        args:
        - '-conf'
        - /etc/coredns/Corefile
        ports:
        - name: dns
          containerPort: 53
          protocol: UDP
        - name: dns-tcp
          containerPort: 53
          protocol: TCP
        - name: metrics
          containerPort: 9153
          protocol: TCP
        resources:
          limits:
            memory: 170Mi
          requests:
            cpu: 100m
            memory: 70Mi
        volumeMounts:
        - name: config-volume
          mountPath: /etc/coredns
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
          timeoutSeconds: 5
          periodSeconds: 10
          successThreshold: 1
          failureThreshold: 10
        readinessProbe:
          httpGet:
            path: /ready
            port: 8181
            scheme: HTTP
          timeoutSeconds: 5
          periodSeconds: 10
          successThreshold: 1
          failureThreshold: 10
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        imagePullPolicy: IfNotPresent
        securityContext:
          capabilities:
            add:
            - NET_BIND_SERVICE
            drop:
            - all
          readOnlyRootFilesystem: true
          allowPrivilegeEscalation: false
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
      dnsPolicy: Default
      nodeSelector:
        beta.kubernetes.io/os: linux
      serviceAccountName: coredns
      serviceAccount: coredns
      securityContext: {}
      affinity:
        nodeAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            preference:
              matchExpressions:
              - key: node-role.kubernetes.io/master
                operator: In
                values:
                - ''
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                k8s-app: kube-dns
            topologyKey: kubernetes.io/hostname
      schedulerName: default-scheduler
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      - key: CriticalAddonsOnly
        operator: Exists
      priorityClassName: system-cluster-critical
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0
      maxSurge: 10%
  revisionHistoryLimit: 10
  progressDeadlineSeconds: 600
status:
  observedGeneration: 2
  replicas: 2
  updatedReplicas: 2
  readyReplicas: 1
  availableReplicas: 1
  unavailableReplicas: 1
  conditions:
  - type: Progressing
    status: 'True'
    lastUpdateTime: '2020-01-24T16:14:42Z'
    lastTransitionTime: '2020-01-24T16:14:37Z'
    reason: NewReplicaSetAvailable
    message: ReplicaSet "coredns-58687784f9" has successfully progressed.
  - type: Available
    status: 'False'
    lastUpdateTime: '2020-01-27T17:42:57Z'
    lastTransitionTime: '2020-01-27T17:42:57Z'
    reason: MinimumReplicasUnavailable
    message: Deployment does not have minimum availability.
Deployment for webservices:
kind: Deployment
apiVersion: apps/v1
metadata:
  name: webservices
  namespace: default
  selfLink: /apis/apps/v1/namespaces/default/deployments/webservices
  uid: da75d3d8-92f4-4d06-86d6-e2fb325806a5
  resourceVersion: '398529'
  generation: 1
  creationTimestamp: '2020-01-27T08:05:16Z'
  labels:
    run: webservices
  annotations:
    deployment.kubernetes.io/revision: '1'
spec:
  replicas: 5
  selector:
    matchLabels:
      run: webservices
  template:
    metadata:
      creationTimestamp: null
      labels:
        run: webservices
    spec:
      containers:
      - name: webservices
        image: nginx
        ports:
        - containerPort: 80
          protocol: TCP
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        imagePullPolicy: Always
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
      dnsPolicy: ClusterFirst
      securityContext: {}
      schedulerName: default-scheduler
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%
      maxSurge: 25%
  revisionHistoryLimit: 10
  progressDeadlineSeconds: 600
status:
  observedGeneration: 1
  replicas: 5
  updatedReplicas: 5
  unavailableReplicas: 5
  conditions:
  - type: Available
    status: 'False'
    lastUpdateTime: '2020-01-27T08:05:16Z'
    lastTransitionTime: '2020-01-27T08:05:16Z'
    reason: MinimumReplicasUnavailable
    message: Deployment does not have minimum availability.
  - type: Progressing
    status: 'False'
    lastUpdateTime: '2020-01-27T17:52:58Z'
    lastTransitionTime: '2020-01-27T17:52:58Z'
    reason: ProgressDeadlineExceeded
    message: ReplicaSet "webservices-8675d4667d" has timed out progressing.
Finally, I decided to reinstall the nodes from Debian 10 to Ubuntu 18.04, and everything works as expected.
Thank you for your time.
The problem is that kube-proxy isn't functioning correctly. I believe 10.233.0.1 is the Kubernetes API service address, which kube-proxy is responsible for setting up. You should check the kube-proxy logs, see that it is healthy, and verify that it creates the iptables rules for the Kubernetes services.
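A couple of commands that can confirm this (assuming the k8s-app=kube-proxy label that kubeadm/kubespray deployments use):
kubectl -n kube-system get pods -l k8s-app=kube-proxy -o wide
kubectl -n kube-system logs -l k8s-app=kube-proxy --tail=50
# the cluster service VIP should appear in the node's NAT rules
sudo iptables -t nat -L KUBE-SERVICES -n | grep 10.233.0.1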
Take a look here: calico-timeout-pod.
I had to set the following on the worker node as well, before joining it, for it to work:
sudo sysctl net.bridge.bridge-nf-call-iptables=1
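To make that setting survive reboots (standard sysctl mechanics; the br_netfilter module must be loaded for the key to exist):
sudo modprobe br_netfilter
echo 'net.bridge.bridge-nf-call-iptables = 1' | sudo tee /etc/sysctl.d/99-kubernetes.conf
sudo sysctl --system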
I was having a similar issue. I am using microk8s in my instance. It seems the node needs to advertise itself to the cluster. I hope this points you in the right direction (repost from GitHub):
microk8s stop
# or for workers: sudo snap stop microk8s
sudo vim.tiny /var/snap/microk8s/current/args/kubelet
# Add this to bottom: --node-ip=<this-specific-node-lan-ip>
sudo vim.tiny /var/snap/microk8s/current/args/kube-apiserver
# Add this to bottom: --advertise-address=<this-specific-node-lan-ip>
microk8s start
# or for workers: sudo snap start microk8s

The l7-default-backend deployment gets reverted when I edit it

I upgraded my GKE clusters to Kubernetes 1.5.6 a couple of days ago. I used to be able to scale the l7-default-backend deployment to 3 replicas and increase the CPU resource limits, but now my changes get reverted to their defaults of 1 replica and a 10m CPU limit.
Interestingly enough, when I added the following:
nodeSelector:
  cloud.google.com/gke-nodepool: default-pool
it persisted with no problem.
Here's the current deployment manifest:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "4"
    kubectl.kubernetes.io/last-applied-configuration: '{"kind":"Deployment","apiVersion":"extensions/v1beta1","metadata":{"name":"l7-default-backend","namespace":"kube-system","creationTimestamp":null,"labels":{"k8s-app":"glbc","kubernetes.io/cluster-service":"true","kubernetes.io/name":"GLBC"}},"spec":{"replicas":1,"selector":{"matchLabels":{"k8s-app":"glbc"}},"template":{"metadata":{"creationTimestamp":null,"labels":{"k8s-app":"glbc","name":"glbc"}},"spec":{"containers":[{"name":"default-http-backend","image":"gcr.io/google_containers/defaultbackend:1.0","ports":[{"containerPort":8080}],"resources":{"limits":{"cpu":"10m","memory":"20Mi"},"requests":{"cpu":"10m","memory":"20Mi"}},"livenessProbe":{"httpGet":{"path":"/healthz","port":8080,"scheme":"HTTP"},"initialDelaySeconds":30,"timeoutSeconds":5}}]}},"strategy":{}},"status":{}}'
  creationTimestamp: 2017-03-23T23:30:12Z
  generation: 9
  labels:
    k8s-app: glbc
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: GLBC
  name: l7-default-backend
  namespace: kube-system
  resourceVersion: "40149922"
  selfLink: /apis/extensions/v1beta1/namespaces/kube-system/deployments/l7-default-backend
  uid: a9772d26-1020-11e7-b9a8-42010af001d0
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: glbc
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        k8s-app: glbc
        name: glbc
    spec:
      containers:
      - image: gcr.io/google_containers/defaultbackend:1.0
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /healthz
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 30
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 5
        name: default-http-backend
        ports:
        - containerPort: 8080
          protocol: TCP
        resources:
          limits:
            cpu: 10m
            memory: 20Mi
          requests:
            cpu: 10m
            memory: 20Mi
        terminationMessagePath: /dev/termination-log
      dnsPolicy: ClusterFirst
      nodeSelector:
        cloud.google.com/gke-nodepool: default-pool
      restartPolicy: Always
      securityContext: {}
      terminationGracePeriodSeconds: 30
status:
  availableReplicas: 1
  conditions:
  - lastTransitionTime: 2017-04-13T19:19:35Z
    lastUpdateTime: 2017-04-13T19:19:35Z
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  observedGeneration: 9
  replicas: 1
  updatedReplicas: 1
How can I scale the l7-default-backend successfully?